Redefining Software Scalability for the Network Infrastructure
WHITE PAPER: Redefining Software Scalability for the Network Infrastructure

By Paul N. Leroux, Technology Analyst, QNX Software Systems Ltd.

In their efforts to build a comprehensive range of networking products, many equipment manufacturers have invested in an equally wide range of operating systems (OSs). The results are predictable: code can't be reused across products, engineers can't move quickly from one project to another, and the networking products themselves can't offer end-to-end consistency of software services and management tools, much to the customer's inconvenience. In this paper, we look at how a microkernel OS based on network-transparent IPC can address these issues by allowing applications to be coded once, then deployed across entire product lines. With this OS architecture, the same application can run on a single-processor device, be partitioned across a cluster of loosely coupled processors, or run on an SMP system, all without recoding or relinking. The net effect: less development effort, reduced testing, greater product consistency, and higher return on investment.

The High Cost of OS Ownership

For companies building carrier-class networking equipment, the ability to reuse software across an entire product line holds immense commercial advantages. For example, if applications and system software deployed in a core network element can, without modification, be reused in edge or aggregation devices, then the equipment vendor can achieve both higher return on software investment and faster time-to-market. Nonetheless, in their efforts to build a wide range of network elements, many equipment manufacturers have had to invest in an equally wide range of operating systems (OSs). It's common for a manufacturer to use 5, 10, even 20 OSs, each with different tools, different APIs, and different maintenance problems. The consequences are predictable.
More often than not, code can't be reused across projects, and engineers have to learn a new OS and new tools when moving from one project to another. Return on code investment is, to say the least, limited, as is the ability to deploy multiple products quickly. The customer is also affected. Since software varies from product to product, so can interfaces and management tools. The skills that the customer has acquired for using one product don't always apply to other, similar, products. The end result: higher cost of ownership.

The Demand for Massive Scalability

But what if all those OSs weren't necessary? What if one OS could let you use the same code, tools, and APIs, and by extension the same developers, for everything from edge devices to carrier-class equipment? More to the point, what if the OS could let you reuse application binaries, not just source code, across complete product lines? A tall order. The OS would, in fact, have to be massively scalable. For example, it would need to:

- address an enormous range of memory configurations, everything from a few hundred kilobytes to several gigabytes
- have the ability to coordinate hundreds, if not thousands, of simultaneous software processes [1]
- allow the same applications and drivers to run on a single processor, across a network of loosely coupled processors, or on a tightly coupled SMP system, all without recoding

Scalability Across Loosely Coupled Processors

Conventional OS architectures fall short on all these counts, particularly the last. Let's look at an example. In the distributed architecture of a modern high-end router, network growth is, in theory, easily accommodated: you simply add more line cards, each capable of making its own routing decisions. On one hand, this decentralization avoids the bottleneck of a single routing processor.
On the other hand, many line cards attempting to communicate simultaneously with the main processor can quickly overload the system bus. The router, as a result, can't scale to handle increased traffic even though it has the raw processing power to do so. One solution: move software intelligence, such as the routing database, off the main processor and onto the line-card processors, thereby freeing up the system bus. Unfortunately, conventional OS architectures would make moving the database difficult, for several reasons. First, most OSs don't provide network-transparent interprocess communication (IPC). So, if you split up an application's components across different CPUs, you must also add network-specific code so those components can continue talking to each other. In our case, you'd have to recode the database, along with any software modules it communicates with. As an added complication, most or all software modules in conventional RTOSs are bound to the kernel. So, to move the database process from the main processor to the line card, you'd probably have to create, and test, two new kernel images: one for the line card and one for the main CPU.

[1] To enable high system availability, the OS should allow virtually any of these processes, be it an application, driver, or OS module, to be upgraded or restarted dynamically, without interruption of service.

68 VOLUME 3, SPRING 2002
Of course, similar problems would occur if, say, you tried to move an application distributed across multiple processors to a lower-end, single-processor product. The application would, in effect, be locked in to the current design. With network-transparent IPC, any process can be moved from one CPU to another, without recoding the process itself or any other processes it communicates with. Likewise, the various processes that make up an application can either run on a single CPU or be distributed across multiple loosely coupled CPUs, again without recoding.

Unlocking the design: The QNX approach

The QNX realtime OS (RTOS) sidesteps these problems in two ways. First, it uses a true microkernel architecture that decouples applications, protocol stacks, drivers, and even high-level OS services (e.g. file systems) from the OS kernel. As a result, every software module can be an independent, MMU-protected process whose binary can be moved, without relinking, from one CPU to another. No kernel reconfiguration or retesting required.

[Figure: With a true microkernel OS architecture, every driver, application, protocol stack, and service (GUI manager, file system, device I/O manager, network manager, graphics, and so on) runs as a separate, MMU-protected process above the microkernel. This is also known as the universal process model architecture, or UPM.]

Second, the QNX RTOS provides a global interface, message passing, that operates identically in both the local and network-remote cases. As a result, any process or thread on a given CPU can transparently access any resource associated with any other. No networking code required. From the application's perspective, there's simply no difference between a local and a remote resource. In fact, an application would need special code to tell whether a resource, be it a database, file, or I/O device, resides on the local CPU or on some other CPU on the network. [2]

Higher ROI and reliability

What does this mean?
Instead of having islands of computing, where each processor is effectively isolated, you now have a "virtual supercomputer" model, where messages flow freely across processor boundaries. In the case of our router example, this network transparency neatly removes a limit on scalability: the database can be moved as is to another processor. Increased scalability aside, this approach provides:

- Better return on investment: Programmers can design an application just once; they don't have to recode (and recompile and relink and retest) if the application has to be moved or partitioned across different processors.

- Greater consistency across products: Many of the same programs, in fact the same binaries, performing administration and control functions in, say, a backbone router can be reused as is in SOHO devices. As a result, network administrators can work with the same interfaces and management tools across a wide spectrum of networking equipment.

- Significantly greater reliability and confidence: Since the same software can be used across both higher- and lower-end products, improvements derived from field-testing one device can be directly applied to the device's smaller (or larger) cousins. Reliability and product quality improve across the entire product line.

- Freedom to introduce new network architectures: Since applications don't need network-specific code, new network architectures, using various hardware and protocols, can be introduced without having to recode the applications. Simply put, applications don't "care" what protocol or physical medium they communicate over; it could be Ethernet today or a backplane bus tomorrow.

Network-wide I/O namespace

In addition to network-transparent message passing, the QNX RTOS shields applications from networking issues by allowing all resources, database services, network connections, I/O devices, and so on, to be viewed and handled as files.
For example, if the database manager in the above example wishes to provide its database services to other processes, it can register a unique pathname in the network-wide I/O namespace. Any client application that wishes to use those services simply issues standard POSIX calls, open(), read(), write(), and so on, on that pathname. [2] The database manager will then take appropriate action based on the call made by the application. With this approach, it doesn't matter which CPU the client is running on; likewise, the client doesn't need to know where the database manager resides. The client simply writes to a pathname, and the OS automatically routes the request to the appropriate process.

[2] For an in-depth discussion of how QNX message passing enables both network transparency and a high level of realtime performance, see the QNX RTOS v6 System Architecture Guide.

An Alternate Solution: Create a Load-sharing Manager

With this facility in mind, let's return to our example and look at another approach to handling the traffic from multiple line cards. This time, instead of moving the database, we could implement two main processors, each mirroring the database. We could then create a load-sharing manager that would distribute requests coming from line cards across the two processors. Besides handling a larger number of line cards, this approach could also provide redundancy in case one main processor or one of the databases failed. In this case, the load-sharing manager could automatically shunt all requests to the remaining database, until the failed database recovered.

The important thing here is that existing applications don't have to be recoded. For example, if a process on a line card makes a request of the database, it would use the same pathname, regardless of which processor might actually handle the request. The load-sharing manager would decide which processor the request goes to, without involving the application.

[Figure: Network-transparent IPC simplifies the design of 2N redundant systems, since applications don't have to be coded to know which, or how many, CPUs a service resides on. Requests to a mirrored database, for example, can be handled by a separate load-sharing manager, leading to a more scalable, cleanly partitioned design.]

Scalable Bandwidth through Multiple Network Links

So far, we've looked at a couple of ways in which network-transparent IPC can help us handle greater network traffic, and thereby increase scalability. Sometimes, however, the only solution is to increase actual bandwidth. For example, we could connect the processors via multiple links, whether those processors talk over a switch, system bus, LAN, serial link, or any combination thereof. Unfortunately, conventional OS architectures don't offer seamless support for using multiple links over different types of media. In fact, since interprocess communication (IPC) is typically implemented "by hand" for each protocol, trying to make every application aware of multiple links, with each link potentially handled by a different protocol, is daunting at best.

To address this problem, the QNX RTOS provides inherent support for multiple links, again without any need for special application code. In fact, this capability provides not only higher throughput, but also network fault-tolerance. For example, you can choose from the following classes of service:

- Load-balance: Queue packets on the link that will deliver them the fastest, based on current load and link capacity. This policy uses the combined service of all links to maximize throughput and allows service to degrade gracefully if any link fails. If a link does fail, periodic maintenance packets are tried on that link to detect recovery. When the link recovers, it's placed back into the pool of available links.

- Redundant: Send every packet over all links simultaneously. If a packet on link A arrives before the same packet on link B, the packet on link A "wins." Redundant packets that arrive later are quietly dropped. With this policy, service can continue without a stutter even if one link fails.

- Sequential: Send out all packets over one link until it goes down, at which point use the second (or third or fourth) link. (This option doesn't provide higher throughput, but does offer fault-tolerance.)

- Preferred: Same as sequential, but fall back to load-balancing if the specified link can't be used; that is, use all available links to reach the remote node.

[Figure: Load-balancing (active/active). Packets travel on whichever link, fiber or otherwise, will deliver them the fastest.]

[Figure: Redundant (active/active). Packets travel across all links simultaneously. If one link fails, service can continue without a stutter.]

[Figure: Sequential (active/standby). Packets travel on the primary link. If that link fails, packets are automatically rerouted to the secondary link, and so on.]

Importantly, the QNX resource manager that provides these services, Qnet, is abstracted from the actual transport layer. It doesn't know, or care, whether the connections are fiber, Ethernet, serial, and so on. Nor do user applications. The specifics of the physical transport are handled by a separate driver that talks directly to the hardware. This approach provides:

- Freedom to mix and match interfaces: The designer can mix and match network links according to the needs of the design. One link could be fiber, the second serial, and so on. No special application coding is needed. The same applies for links that connect processors residing in different machines. One link could be ATM, the second ISDN, the third 100 Mb/sec Ethernet, and so on.

- Better code reuse across products: Since applications distributed across processors don't have to know how many links, or what kind of links, exist between the processors, the same application binaries can be reused across any number of product configurations. For example, a database program could be talking to client programs running on another CPU connected by a bus. In another installation, the exact same client programs could be running on an entirely different machine connected by multiple links. And in yet another (low-end) machine, the clients and the database could be on the same processor. Neither the database nor the client processes would know the difference.

Scalability Across Tightly Coupled Processors (SMP)

In many networking devices, the workload for the control plane has ballooned to the point where even the fastest CPU can't keep up. For instance, in a high-end router, the CPU must handle compute-intensive protocols such as OSPF, maintain a routing database of 500,000 or more entries, perform OA&M functions, process SNMP packets, and download a subset of the routing table to each line card, as well as handle any new network services coming down the pipe. With network bandwidth doubling at twice the speed of CPU performance, the problem shows no signs of letting up.

To meet these computational demands, more and more systems designers are distributing the workload across multiple CPUs, using symmetric multi-processing (SMP). SMP is often called the "shared everything" approach to multi-processing, since the multiple CPUs share the same board, memory, I/O, and operating system (OS). In fact, this shared approach contributes to one of SMP's key advantages: low cost. For instance, when scaling from one to two CPUs, you still only use one processor board, not two; you effectively double your processing power without paying for additional support chips and without taking up an additional slot in the chassis.

[Figure: The QNX RTOS conforms to the Intel MultiProcessor Specification and can support up to 8 Pentium or Pentium Pro processors sharing a high-bandwidth memory bus.]

Nonetheless, before choosing an OS to implement SMP, the systems designer should ensure that the OS will allow the same software, ideally the same tested binaries, to be reused across both single-processor and SMP members of a product family. This, in turn, will ensure higher return on investment, end-to-end software consistency across the product line, and, importantly, higher reliability.
There's another issue. While SMP can boost performance dramatically, the law of diminishing returns can come into play as multiple processors contend for the same memory subsystem. So it's critical that the OS used to implement SMP doesn't add any unnecessary overhead on top of these natural barriers. That's a problem, since SMP is commonly associated with large, monolithic OSs used in enterprise server roles. Because the kernels in these OSs contain the bulk of OS services, adding SMP support typically requires large numbers of performance-robbing modifications and specialized spinlocks throughout the kernel code. Also, since all device drivers run in the kernel space, adding SMP support means modifying each driver as well. In fact, one reason SMP isn't used more frequently is the difficulty of implementing it in software. Consequently, designers must often deploy limited implementations, where only certain routines are allowed to run on the second processor, resulting in modest performance gains of just 10 to 30 per cent.

No recoding required: the microkernel approach

An OS with a microkernel architecture, such as the QNX RTOS, helps designers avoid the above problems. Compared to monolithic OS kernels, the QNX microkernel is extremely small, since most OS-level services (file systems, drivers, protocol stacks, and so on) exist as user programs that run outside the kernel space. Consequently, the kernel modifications required for SMP are equally small: just a few additional kilobytes of code. In fact, only the kernel has to be modified. All other multithreaded services, file systems, drivers, applications, can gain the performance advantages of SMP without the need for code changes. Since so little code has to be added to the kernel, this approach to implementing SMP incurs negligible overhead. And compared to monolithic OS models, it's inherently more reliable, since there's simply less to go wrong.
[Figure: With a microkernel architecture, drivers, protocol stacks, and OS modules can migrate from a single-processor device to a single SMP card, or be distributed across a cluster of loosely coupled SMP devices, without recoding.]

Scalability on demand

Of course, for the networking equipment manufacturer, it's equally important that custom applications and drivers, not just off-the-shelf OS modules, can move unmodified between single-processor and SMP systems. And, in fact, with this microkernel approach, there's no need to recode or relink custom software, provided the software has been coded to be "SMP safe." [3]

Combining SMP with real time

To optimize cache performance, the QNX RTOS supports processor affinity: the kernel will always try to dispatch a thread to the CPU where the thread last ran. To further enhance performance, QNX provides an affinity mask, which can, for example, let you relegate all non-realtime threads to a particular CPU. The remaining CPUs would then always remain free to execute time-critical processes. In general, however, this approach isn't necessary, since QNX's realtime scheduler will always preempt a lower-priority thread immediately when a higher-priority thread becomes ready.

In fact, thanks to these preemptive capabilities, the QNX RTOS can help an SMP device handle an increase in system load without the cost (and complexity) of adding more CPUs. That's because time-critical tasks, such as routing table updates, are always executed in a predictable time frame, no matter how many other processes demand CPU time. Response times can remain constant, even as overall system load increases. Also, as an RTOS, QNX can deliver context-switch speeds for threads and processes in the sub-microsecond range, orders of magnitude faster than OSs conventionally used in SMP server roles. As a result, CPUs waste much less time switching from one thread to another and have more time to execute compute-intensive applications.
True Scalability: The Commercial Advantage

Little or no recoding. That, in a nutshell, is the hallmark of true scalability. As we've seen, it's not enough for an OS to simply scale down (or up) in memory footprint, or to support SMP. Rather, it must also allow software applications to move seamlessly across networking products, from edge devices to gigabit routers, without redesign or recoding, and with minimal retesting. This level of scalability isn't merely desirable. Given the massive range of products a networking equipment manufacturer may offer, it is, in fact, an immense commercial advantage, whether you consider development costs, time-to-market, or customer satisfaction. Code can be reused rather than redesigned. Engineers can move freely between projects, without retraining. And customers can enjoy the convenience and lower cost of ownership of using the same set of tools and interfaces across an entire product line.

[3] For the most part, this simply means that applications use standard POSIX primitives to control access to shared data structures. Of course, to reap the full benefits of SMP, the application should be designed with enough parallelism, achieved through multiple independent threads, to keep multiple CPUs busy.
More informationThe Ultimate in Scale-Out Storage for HPC and Big Data
Node Inventory Health and Active Filesystem Throughput Monitoring Asset Utilization and Capacity Statistics Manager brings to life powerful, intuitive, context-aware real-time monitoring and proactive
More informationWindows Server 2008 R2 Hyper-V Live Migration
Windows Server 2008 R2 Hyper-V Live Migration Table of Contents Overview of Windows Server 2008 R2 Hyper-V Features... 3 Dynamic VM storage... 3 Enhanced Processor Support... 3 Enhanced Networking Support...
More informationUsing In-Memory Computing to Simplify Big Data Analytics
SCALEOUT SOFTWARE Using In-Memory Computing to Simplify Big Data Analytics by Dr. William Bain, ScaleOut Software, Inc. 2012 ScaleOut Software, Inc. 12/27/2012 T he big data revolution is upon us, fed
More informationPost-production Video Editing Solution Guide with Microsoft SMB 3 File Serving AssuredSAN 4000
Post-production Video Editing Solution Guide with Microsoft SMB 3 File Serving AssuredSAN 4000 Dot Hill Systems introduction 1 INTRODUCTION Dot Hill Systems offers high performance network storage products
More informationNeverfail Solutions for VMware: Continuous Availability for Mission-Critical Applications throughout the Virtual Lifecycle
Neverfail Solutions for VMware: Continuous Availability for Mission-Critical Applications throughout the Virtual Lifecycle Table of Contents Virtualization 3 Benefits of Virtualization 3 Continuous Availability
More informationHow Solace Message Routers Reduce the Cost of IT Infrastructure
How Message Routers Reduce the Cost of IT Infrastructure This paper explains how s innovative solution can significantly reduce the total cost of ownership of your messaging middleware platform and IT
More informationRadware ADC-VX Solution. The Agility of Virtual; The Predictability of Physical
Radware ADC-VX Solution The Agility of Virtual; The Predictability of Physical Table of Contents General... 3 Virtualization and consolidation trends in the data centers... 3 How virtualization and consolidation
More informationHow In-Memory Data Grids Can Analyze Fast-Changing Data in Real Time
SCALEOUT SOFTWARE How In-Memory Data Grids Can Analyze Fast-Changing Data in Real Time by Dr. William Bain and Dr. Mikhail Sobolev, ScaleOut Software, Inc. 2012 ScaleOut Software, Inc. 12/27/2012 T wenty-first
More informationVirtualization. Dr. Yingwu Zhu
Virtualization Dr. Yingwu Zhu What is virtualization? Virtualization allows one computer to do the job of multiple computers. Virtual environments let one computer host multiple operating systems at the
More informationBig data management with IBM General Parallel File System
Big data management with IBM General Parallel File System Optimize storage management and boost your return on investment Highlights Handles the explosive growth of structured and unstructured data Offers
More informationReducing Storage TCO With Private Cloud Storage
Prepared by: Colm Keegan, Senior Analyst Prepared: October 2014 With the burgeoning growth of data, many legacy storage systems simply struggle to keep the total cost of ownership (TCO) in check. This
More informationWindows Server Performance Monitoring
Spot server problems before they are noticed The system s really slow today! How often have you heard that? Finding the solution isn t so easy. The obvious questions to ask are why is it running slowly
More informationEasier - Faster - Better
Highest reliability, availability and serviceability ClusterStor gets you productive fast with robust professional service offerings available as part of solution delivery, including quality controlled
More informationMaking the Move to Desktop Virtualization No More Reasons to Delay
Enabling the Always-On Enterprise Making the Move to Desktop Virtualization No More Reasons to Delay By Andrew Melmed Director of Enterprise Solutions, Sanbolic Inc. April 2012 Introduction It s a well-known
More informationChapter 2 TOPOLOGY SELECTION. SYS-ED/ Computer Education Techniques, Inc.
Chapter 2 TOPOLOGY SELECTION SYS-ED/ Computer Education Techniques, Inc. Objectives You will learn: Topology selection criteria. Perform a comparison of topology selection criteria. WebSphere component
More informationAuspex Support for Cisco Fast EtherChannel TM
Auspex Support for Cisco Fast EtherChannel TM Technical Report 21 Version 1.0 March 1998 Document 300-TC049, V1.0, 980310 Auspex Systems, Inc. 2300 Central Expressway Santa Clara, California 95050-2516
More informationJune 2009. Blade.org 2009 ALL RIGHTS RESERVED
Contributions for this vendor neutral technology paper have been provided by Blade.org members including NetApp, BLADE Network Technologies, and Double-Take Software. June 2009 Blade.org 2009 ALL RIGHTS
More informationAzul Compute Appliances
W H I T E P A P E R Azul Compute Appliances Ultra-high Capacity Building Blocks for Scalable Compute Pools WP_ACA0209 2009 Azul Systems, Inc. W H I T E P A P E R : A z u l C o m p u t e A p p l i a n c
More informationWindows Server 2008 R2 Hyper-V Live Migration
Windows Server 2008 R2 Hyper-V Live Migration White Paper Published: August 09 This is a preliminary document and may be changed substantially prior to final commercial release of the software described
More informationCHAPTER 15: Operating Systems: An Overview
CHAPTER 15: Operating Systems: An Overview The Architecture of Computer Hardware, Systems Software & Networking: An Information Technology Approach 4th Edition, Irv Englander John Wiley and Sons 2010 PowerPoint
More informationCisco UCS and Fusion- io take Big Data workloads to extreme performance in a small footprint: A case study with Oracle NoSQL database
Cisco UCS and Fusion- io take Big Data workloads to extreme performance in a small footprint: A case study with Oracle NoSQL database Built up on Cisco s big data common platform architecture (CPA), a
More informationRadware ADC-VX Solution. The Agility of Virtual; The Predictability of Physical
Radware ADC-VX Solution The Agility of Virtual; The Predictability of Physical Table of Contents General... 3 Virtualization and consolidation trends in the data centers... 3 How virtualization and consolidation
More informationFull and Para Virtualization
Full and Para Virtualization Dr. Sanjay P. Ahuja, Ph.D. 2010-14 FIS Distinguished Professor of Computer Science School of Computing, UNF x86 Hardware Virtualization The x86 architecture offers four levels
More informationBest Practices for Implementing iscsi Storage in a Virtual Server Environment
white paper Best Practices for Implementing iscsi Storage in a Virtual Server Environment Server virtualization is becoming a no-brainer for any that runs more than one application on servers. Nowadays,
More informationGetting More Performance and Efficiency in the Application Delivery Network
SOLUTION BRIEF Intel Xeon Processor E5-2600 v2 Product Family Intel Solid-State Drives (Intel SSD) F5* Networks Delivery Controllers (ADCs) Networking and Communications Getting More Performance and Efficiency
More informationSolving I/O Bottlenecks to Enable Superior Cloud Efficiency
WHITE PAPER Solving I/O Bottlenecks to Enable Superior Cloud Efficiency Overview...1 Mellanox I/O Virtualization Features and Benefits...2 Summary...6 Overview We already have 8 or even 16 cores on one
More informationSupport a New Class of Applications with Cisco UCS M-Series Modular Servers
Solution Brief December 2014 Highlights Support a New Class of Applications Cisco UCS M-Series Modular Servers are designed to support cloud-scale workloads In which a distributed application must run
More informationMeeting the Five Key Needs of Next-Generation Cloud Computing Networks with 10 GbE
White Paper Meeting the Five Key Needs of Next-Generation Cloud Computing Networks Cloud computing promises to bring scalable processing capacity to a wide range of applications in a cost-effective manner.
More informationVTrak 15200 SATA RAID Storage System
Page 1 15-Drive Supports over 5 TB of reliable, low-cost, high performance storage 15200 Product Highlights First to deliver a full HW iscsi solution with SATA drives - Lower CPU utilization - Higher data
More informationSymmetric Multiprocessing
Multicore Computing A multi-core processor is a processing system composed of two or more independent cores. One can describe it as an integrated circuit to which two or more individual processors (called
More informationCisco Application Networking for IBM WebSphere
Cisco Application Networking for IBM WebSphere Faster Downloads and Site Navigation, Less Bandwidth and Server Processing, and Greater Availability for Global Deployments What You Will Learn To address
More informationChapter 2: OS Overview
Chapter 2: OS Overview CmSc 335 Operating Systems 1. Operating system objectives and functions Operating systems control and support the usage of computer systems. a. usage users of a computer system:
More informationHyperQ Storage Tiering White Paper
HyperQ Storage Tiering White Paper An Easy Way to Deal with Data Growth Parsec Labs, LLC. 7101 Northland Circle North, Suite 105 Brooklyn Park, MN 55428 USA 1-763-219-8811 www.parseclabs.com info@parseclabs.com
More informationLOAD BALANCING IN WEB SERVER
LOAD BALANCING IN WEB SERVER Renu Tyagi 1, Shaily Chaudhary 2, Sweta Payala 3 UG, 1,2,3 Department of Information & Technology, Raj Kumar Goel Institute of Technology for Women, Gautam Buddh Technical
More informationThe Benefits of Virtualizing
T E C H N I C A L B R I E F The Benefits of Virtualizing Aciduisismodo Microsoft SQL Dolore Server Eolore in Dionseq Hitachi Storage Uatummy Environments Odolorem Vel Leveraging Microsoft Hyper-V By Heidi
More informationCisco Application Networking for BEA WebLogic
Cisco Application Networking for BEA WebLogic Faster Downloads and Site Navigation, Less Bandwidth and Server Processing, and Greater Availability for Global Deployments What You Will Learn To address
More informationBrocade Solution for EMC VSPEX Server Virtualization
Reference Architecture Brocade Solution Blueprint Brocade Solution for EMC VSPEX Server Virtualization Microsoft Hyper-V for 50 & 100 Virtual Machines Enabled by Microsoft Hyper-V, Brocade ICX series switch,
More informationComponents of a Computer System
SFWR ENG 3B04 Software Design III 1.1 3 Hardware Processor(s) Memory I/O devices Operating system Kernel System programs Components of a Computer System Application programs Users SFWR ENG 3B04 Software
More informationFrom Ethernet Ubiquity to Ethernet Convergence: The Emergence of the Converged Network Interface Controller
White Paper From Ethernet Ubiquity to Ethernet Convergence: The Emergence of the Converged Network Interface Controller The focus of this paper is on the emergence of the converged network interface controller
More informationDeveloping a dynamic, real-time IT infrastructure with Red Hat integrated virtualization
Developing a dynamic, real-time IT infrastructure with Red Hat integrated virtualization www.redhat.com Table of contents Introduction Page 3 Benefits of virtualization Page 3 Virtualization challenges
More informationMirror File System for Cloud Computing
Mirror File System for Cloud Computing Twin Peaks Software Abstract The idea of the Mirror File System (MFS) is simple. When a user creates or updates a file, MFS creates or updates it in real time on
More informationCommuniGate Pro White Paper. Dynamic Clustering Solution. For Reliable and Scalable. Messaging
CommuniGate Pro White Paper Dynamic Clustering Solution For Reliable and Scalable Messaging Date April 2002 Modern E-Mail Systems: Achieving Speed, Stability and Growth E-mail becomes more important each
More informationPage 1 of 5. IS 335: Information Technology in Business Lecture Outline Operating Systems
Lecture Outline Operating Systems Objectives Describe the functions and layers of an operating system List the resources allocated by the operating system and describe the allocation process Explain how
More informationIBM Enterprise Linux Server
IBM Systems and Technology Group February 2011 IBM Enterprise Linux Server Impressive simplification with leading scalability, high availability and security Table of Contents Executive Summary...2 Our
More informationRed Hat Enterprise linux 5 Continuous Availability
Red Hat Enterprise linux 5 Continuous Availability Businesses continuity needs to be at the heart of any enterprise IT deployment. Even a modest disruption in service is costly in terms of lost revenue
More informationPurpose-Built Load Balancing The Advantages of Coyote Point Equalizer over Software-based Solutions
Purpose-Built Load Balancing The Advantages of Coyote Point Equalizer over Software-based Solutions Abstract Coyote Point Equalizer appliances deliver traffic management solutions that provide high availability,
More informationVirtual PortChannels: Building Networks without Spanning Tree Protocol
. White Paper Virtual PortChannels: Building Networks without Spanning Tree Protocol What You Will Learn This document provides an in-depth look at Cisco's virtual PortChannel (vpc) technology, as developed
More informationFax Server Cluster Configuration
Fax Server Cluster Configuration Low Complexity, Out of the Box Server Clustering for Reliable and Scalable Enterprise Fax Deployment www.softlinx.com Table of Contents INTRODUCTION... 3 REPLIXFAX SYSTEM
More informationVirtualization is set to become a key requirement
Xen, the virtual machine monitor The art of virtualization Moshe Bar Virtualization is set to become a key requirement for every server in the data center. This trend is a direct consequence of an industrywide
More informationMigration Scenario: Migrating Batch Processes to the AWS Cloud
Migration Scenario: Migrating Batch Processes to the AWS Cloud Produce Ingest Process Store Manage Distribute Asset Creation Data Ingestor Metadata Ingestor (Manual) Transcoder Encoder Asset Store Catalog
More informationCisco Unified Computing System: Meet the Challenges of Virtualization with Microsoft Hyper-V
White Paper Cisco Unified Computing System: Meet the Challenges of Virtualization with Microsoft Hyper-V What You Will Learn The modern virtualized data center is today s new IT service delivery foundation,
More informationCloud Infrastructure Foundation. Building a Flexible, Reliable and Automated Cloud with a Unified Computing Fabric from Egenera
Cloud Infrastructure Foundation Building a Flexible, Reliable and Automated Cloud with a Unified Computing Fabric from Egenera Executive Summary At its heart, cloud computing is a new operational and business
More informationTools Page 1 of 13 ON PROGRAM TRANSLATION. A priori, we have two translation mechanisms available:
Tools Page 1 of 13 ON PROGRAM TRANSLATION A priori, we have two translation mechanisms available: Interpretation Compilation On interpretation: Statements are translated one at a time and executed immediately.
More informationSoftware-defined Storage Architecture for Analytics Computing
Software-defined Storage Architecture for Analytics Computing Arati Joshi Performance Engineering Colin Eldridge File System Engineering Carlos Carrero Product Management June 2015 Reference Architecture
More informationVCStack - Powerful Simplicity. Network Virtualization for Today's Business
Network Virtualization for Today's Business Introduction Today's enterprises rely on Information Technology resources and applications, for accessing business-critical information and for day-to-day work.
More informationHBA Virtualization Technologies for Windows OS Environments
HBA Virtualization Technologies for Windows OS Environments FC HBA Virtualization Keeping Pace with Virtualized Data Centers Executive Summary Today, Microsoft offers Virtual Server 2005 R2, a software
More informationChapter 1: Introduction. What is an Operating System?
Chapter 1: Introduction What is an Operating System? Mainframe Systems Desktop Systems Multiprocessor Systems Distributed Systems Clustered System Real -Time Systems Handheld Systems Computing Environments
More informationRunning a Workflow on a PowerCenter Grid
Running a Workflow on a PowerCenter Grid 2010-2014 Informatica Corporation. No part of this document may be reproduced or transmitted in any form, by any means (electronic, photocopying, recording or otherwise)
More informationCisco Application Networking for Citrix Presentation Server
Cisco Application Networking for Citrix Presentation Server Faster Site Navigation, Less Bandwidth and Server Processing, and Greater Availability for Global Deployments What You Will Learn To address
More informationFTOS: A Modular and Portable Switch/Router Operating System Optimized for Resiliency and Scalability
White PAPER FTOS: A Modular and Portable Switch/Router Operating System Optimized for Resiliency and Scalability Introduction As Ethernet switch/routers continue to scale in terms of link speed and port
More information