Multi-version Software Updates
Cristian Cadar and Petr Hosek
Department of Computing, Imperial College London

Abstract
Software updates present a difficult challenge to the software maintenance process. Too often, updates result in failures, and users face the uncomfortable choice between using an old stable version, which misses recent features and bug fixes, and using a new version, which improves the software in certain ways only to introduce other bugs and security vulnerabilities. In this position paper, we propose a radically new approach to performing software updates: whenever a new update becomes available, instead of upgrading the software to the new version, we run the new version in parallel with the old one. By carefully coordinating their executions and selecting the behavior of the more reliable version when they diverge, we can preserve the stability of the old version without giving up the features and bug fixes added to the new version. We are currently focusing on a prototype system targeting multicore CPUs, but we believe this approach could also be deployed on other parallel platforms, such as GPGPUs and cloud environments.

Keywords: software updates, multi-version execution

I. INTRODUCTION
Software updates are an integral part of the software maintenance process, with new software versions being released on a continuous basis. Unfortunately, software updates often result in failures, which makes many users reluctant to incorporate the latest patches made available by developers. As a result, users rely instead on outdated versions which, despite their relative stability, miss recent features and bug fixes and may be plagued by security vulnerabilities.

In this position paper, we propose a novel way of performing software updates which aims to resolve the uncomfortable choice that users often face between using an old stable version that misses recent features and bug fixes, and using a new version that improves the software in certain ways only to introduce other bugs and security vulnerabilities. Our idea is simple yet effective: whenever a new update becomes available, instead of upgrading the software to the new version, we run the new version in parallel with the old one. By carefully coordinating their executions and selecting the behavior of the more reliable version when they diverge, we can preserve the stability of the old version without giving up the features and bug fixes added to the new version.

We believe that multi-version software updates are a timely solution in the context of today's computing platforms [4]. In recent years, we have witnessed the emergence of new technology, ranging from multicore processors to large-scale data centers, which provides an abundance of computational resources and a high degree of parallelism. Furthermore, these resources are often left idle, in which state they still consume a large fraction of their full-utilization energy levels. For example, recent studies [1] have shown that servers in a data center usually operate at between 10% and 50% of their maximum utilization; yet even an energy-efficient server consumes half its full power when doing virtually no work, and for regular servers this ratio is much worse.
Our approach aims to improve the software update process by taking advantage of idle resources, such as idle cores on a CPU and idle servers in a data center, to run multiple versions of an application in parallel, with the goal of improving the overall reliability and security of the software being upgraded. Our current focus is on multicore processors, but we believe this solution could be adapted to work on other parallel platforms as well. Furthermore, this update mechanism could be extended to work with a large number of versions running in parallel and configured to balance conflicting requirements such as performance, reliability and energy consumption.

The rest of this paper is organized as follows. Section II motivates our approach using several real scenarios involving Chrome, Vim, lighttpd and vsftpd. Then, Section III gives an overview of our approach and Section IV discusses the possible ways in which it could be implemented, including an initial prototype that we developed for multicore platforms. Finally, Section V discusses related work and Section VI concludes.

II. MOTIVATING SCENARIOS
In this section we motivate our approach using existing scenarios involving the Chrome browser and the Vim editor, as well as the lighttpd and vsftpd servers. These correspond to two categories of applications that could benefit from our multi-version software update approach: desktop applications, such as web browsers and office tools, for which reliability is a key concern; and network servers, with stringent security and dependability requirements.
Google Chrome is one of the most widely used web browsers. Even though Chrome releases are tested extensively before deployment, they sometimes introduce new bugs that affect the stability of the browser. A concrete example is a release which introduced a bug that caused Chrome to crash when trying to load certain web pages over SSL. One might argue that in this case the user should downgrade to an older version and wait until the bug is fixed. However, the immediately preceding releases suffer from a different bug, introduced in an earlier version, which crashes Chrome during certain sequences of repeated back and forward navigation.

Vim is arguably one of the most popular editors under UNIX. In one release, while trying to fix a memory leak, Vim developers introduced a double-free bug that caused Vim to crash whenever the user tried to use a path completion feature. This bug made its way into Ubuntu 8.04, affecting a large number of users.

lighttpd is a popular web server used by several high-traffic websites such as YouTube, Wikipedia, and Meebo. Despite its popularity, faulty updates are still a common occurrence in lighttpd. As one example, a patch introduced in April 2009 (correctly) fixed the way HTTP ETags are computed. Unfortunately, this fix broke the support for compression, which relied on the previous way in which ETags were computed, and resulted in a segmentation fault whenever a client requested HTTP compression. This issue was only diagnosed and reported in March 2010 and fixed at the end of April 2010, more than one year after it was introduced, leaving the server vulnerable to possible attacks in between.

vsftpd is a fast and secure FTP server for UNIX systems. Version 2.2.0 added several new features such as network isolation, but unfortunately also introduced a bug which triggered a segmentation fault whenever a client used the passive FTP mode. This bug made vsftpd practically unusable, since passive mode is frequently used by clients behind firewalls. Despite being reported several times, this bug was only fixed in version 2.2.1, released more than two months after the bug was introduced.

All the scenarios presented above describe software updates which, while trying to add new features or bug fixes, also introduced new bugs that caused the code to crash under certain conditions. Improving the reliability of such updates is the main goal of our proposed approach: running both the old and the new version in parallel after an update can enable applications to survive more errors, without giving up the new features introduced by the update.

Figure 1. A platform running conventional and multi-version applications side by side (a virtualization framework mediates between the multi-version application, a conventional application, and the external environment).

III. OVERVIEW AND MAIN CHALLENGES
The main goal of our approach is to run multiple versions of an application in parallel, and to synchronize their executions so that (1) users are given the illusion that they interact with a single version of the application, (2) the multi-version application is at least as reliable and secure as any of the individual versions in isolation, and (3) the synchronization mechanism incurs a reasonable performance overhead. As shown in Figure 1, our solution requires a form of virtualization framework that coordinates the execution of multiple application versions and mediates their interaction with the external environment.
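To make the architecture of Figure 1 more concrete, the following C sketch shows one minimal way a coordinating monitor could launch two versions of the same application as child processes before interposing on their external behavior. This is an illustrative sketch only, not our prototype: the binary paths, the version bookkeeping structure and the fixed limit of two versions are hypothetical choices.

/*
 * Illustrative sketch: a monitor launches several versions of the same
 * application as child processes, so that a coordination layer can later
 * mediate their interaction with the environment.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define MAX_VERSIONS 2

struct version {
    pid_t pid;          /* process id of this application version */
    const char *path;   /* binary of the version to run (hypothetical paths) */
    int alive;          /* 1 while the version has not crashed */
};

static struct version versions[MAX_VERSIONS] = {
    { 0, "./app-v1.0", 0 },   /* old, stable version */
    { 0, "./app-v1.1", 0 },   /* freshly released version */
};

int main(void) {
    /* Launch every version; each child runs one version of the application. */
    for (int i = 0; i < MAX_VERSIONS; i++) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); exit(1); }
        if (pid == 0) {
            execl(versions[i].path, versions[i].path, (char *)NULL);
            perror("execl");      /* only reached if exec fails */
            _exit(127);
        }
        versions[i].pid = pid;
        versions[i].alive = 1;
    }

    /* A real monitor would now interpose on the versions' externally visible
     * behavior and present a single interface to the user; here we merely
     * wait for the children to finish. */
    for (int i = 0; i < MAX_VERSIONS; i++) {
        int status;
        waitpid(versions[i].pid, &status, 0);
        versions[i].alive = 0;
    }
    return 0;
}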
Various such frameworks have been designed in the past in the context of running multiple automatically generated variants of a given application in parallel [3], [9], [18], and many of the techniques proposed in prior work can be reused in our context. To be practical, this coordination mechanism has to incur a reasonable overhead on top of native execution, and ensure that the overall system is able to scale the number of software versions run in parallel up and down in order to balance conflicting requirements such as performance, reliability, security and energy consumption.

One particular challenge for our approach is to detect any divergences between different software versions, resolve them in such a way as to increase the overall reliability of the application, and finally synchronize the different versions again once their executions reconverge to the same behavior. Of course, we also need to handle the case in which the executions of different versions fail to reconverge to the same behavior after sufficient time.
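As a sketch of what divergence detection could look like at the level of externally visible actions, the following C fragment compares two hypothetical syscall_record structures, one per version. Both the structure and the notion of equivalence shown here are illustrative assumptions, not the data types used by our prototype.

/*
 * Illustrative divergence check: each version's next externally visible
 * action is assumed to have been captured by an interposition layer as a
 * syscall_record.
 */
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

struct syscall_record {
    long nr;              /* system call number */
    long scalar_args[6];  /* non-pointer arguments (fds, flags, lengths, ...) */
    const void *data;     /* bytes referenced by pointer arguments, if any */
    size_t data_len;      /* number of such bytes */
};

/* Two records are equivalent when they request the same external action:
 * same call, same scalar arguments, same referenced bytes. Pointer arguments
 * are compared by content because addresses naturally differ between the two
 * processes. Anything else is a divergence the monitor must resolve before
 * letting either version proceed. */
static bool records_equivalent(const struct syscall_record *a,
                               const struct syscall_record *b)
{
    if (a->nr != b->nr)
        return false;
    if (memcmp(a->scalar_args, b->scalar_args, sizeof a->scalar_args) != 0)
        return false;
    if (a->data_len != b->data_len)
        return false;
    return a->data_len == 0 || memcmp(a->data, b->data, a->data_len) == 0;
}

int main(void)
{
    /* Example: both versions issue write(1, "hello", 5) (call number 1 is
     * write on x86-64); the pointer slot is left as 0 in scalar_args and the
     * payload is carried in data/data_len. */
    struct syscall_record old_v = { 1, { 1, 0, 5 }, "hello", 5 };
    struct syscall_record new_v = { 1, { 1, 0, 5 }, "hello", 5 };
    return records_equivalent(&old_v, &new_v) ? 0 : 1;
}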
Selecting the correct behavior of an application when different versions disagree is of course not possible in the general case without having access to a high-level specification. However, one could (1) focus on universal correctness properties, such as the absence of crashes, and (2) use various heuristics, such as majority voting and favoring the latest application versions. For example, our current prototype resolves a divergence by always using the behavior of the version that has not crashed, and favoring the behavior of the latest version in all other cases. In this way, we ensure that the overall application has strictly fewer crashes than any of the individual versions, while still using the new features implemented in the latest version.

Note that one key aspect on which our approach relies is keeping the different versions alive at all times. This ensures that applications can survive crashes that occur at different points in different versions, but adds the extra challenge of restarting crashed versions. Finally, our approach needs a deployment strategy to decide which versions are run in parallel when the number of available resources (e.g., idle CPU cores) is limited. We envision several options, such as keeping the last n released versions (where n is the number of available resources) or keeping several very old stable versions, but the exact strategy should be decided on a case-by-case basis.

IV. IMPLEMENTATION OPTIONS AND CURRENT PROTOTYPE SYSTEM
Our multi-version execution update mechanism can be implemented at multiple levels of abstraction. The simplest, but least flexible, approach is to synchronize the different versions at the level of application inputs and outputs. This is particularly suitable for applications that use well-defined protocols, such as web servers and web browsers. The main advantage of this option is that it can tolerate large differences between software versions (in fact, one could even run different implementations, as in [24]), but the main downside is that it is not applicable in the general case, when the input-output specification is not available.

A more flexible approach, which we adopted in our prototype implementation, is to synchronize the different versions at the level of system calls. System calls represent the external behavior of an application, as they are the only way in which an application can interact with the surrounding environment. While this option allows fewer changes between different application versions (basically, the external behavior has to stay the same, although various optimizations are possible), it does not require any a priori knowledge of the application's input/output behavior, and is particularly suitable in the context of software updates, where the external application behavior usually remains unchanged. In fact, an empirical study that we conducted on several real applications (Vim, lighttpd, redis) has shown that while the source code of these applications changes on a regular basis, the external behavior changes only infrequently, often remaining stable as the software evolves.

An option similar to the system call interposition approach described above is to synchronize versions at the level of library calls. This has a performance advantage, as library functions can be executed only once, propagating the result to all versions. More generally, one can trade off the amount of code that is run (and synchronized) in parallel against the performance overhead of the overall system. For example, one sensible option would be to avoid replicating the parts of the code that are highly trusted, e.g., those that have not been changed in the last several years, or those that can be statically proven to be safe.
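To illustrate the system call interposition option described above, the following minimal C sketch (Linux/x86-64) uses ptrace to trace a single child and observe each system call it attempts. A multi-version monitor would trace the old and the new version in this way and compare their calls before letting either proceed; that coordination logic is omitted here.

/*
 * Minimal single-child sketch of system-call interposition on Linux/x86-64:
 * trace one program and print the system calls it attempts.
 */
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <program> [args...]\n", argv[0]);
        return 1;
    }

    pid_t child = fork();
    if (child == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);  /* let the parent trace us */
        execvp(argv[1], &argv[1]);
        perror("execvp");
        _exit(127);
    }

    int status;
    waitpid(child, &status, 0);                 /* child stops at the exec */

    while (1) {
        /* Run the child until its next system-call entry or exit. */
        if (ptrace(PTRACE_SYSCALL, child, NULL, NULL) == -1)
            break;
        if (waitpid(child, &status, 0) == -1 || WIFEXITED(status))
            break;

        struct user_regs_struct regs;
        ptrace(PTRACE_GETREGS, child, NULL, &regs);
        /* orig_rax holds the system call number on x86-64; a multi-version
         * monitor would compare this (and the arguments) against the other
         * version before allowing the call to proceed. */
        printf("syscall %lld\n", (long long)regs.orig_rax);
    }
    return 0;
}

Each call is reported both at entry and at exit, which is also where a monitor could rewrite arguments or return values when resolving a divergence.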
We have implemented our approach in a prototype system targeted at multicore processors running Linux, using the system call interposition approach (an earlier version of our prototype is described in [13]). Currently, our prototype works with only two application versions, but we are working on adding support for a larger number of versions. As discussed in Section III, our prototype focuses on minimizing the number of crashes introduced via software updates, by using the behavior of the surviving version when one of the versions crashes, and the behavior of the latest version in all other cases. One important aspect of our prototype is its ability to restart a crashed application, which is accomplished by a combination of lightweight OS-level checkpointing and a form of runtime code patching which uses information from a binary static analysis pass. Our prototype supports a wide range of Linux applications, in many cases with a reasonable performance overhead (21.48% on average for SPEC CINT CPU2006, but up to 17x on some other benchmarks). More importantly, we were able to use our prototype to survive a series of crash bugs in several real applications, such as Coreutils, lighttpd, and redis.

However, our prototype still has a series of important limitations, which open up many opportunities for future development. We discuss some of them below.

Currently, our prototype intercepts every system call performed by each application version. This has two main disadvantages. First, any change in system call behavior is flagged as a divergence (we optimize this by ignoring several system calls that can be safely replayed from the last checkpoint, e.g., geteuid, but the general problem remains). We plan to improve this by using an epoch-based approach [22], in which multiple system calls are processed at a time and their overall equivalence is determined in a more flexible way, e.g., two sequences of writes can be considered equivalent as long as their overall result is the same, as sketched below.
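As a sketch of this epoch-based relaxation, the following C fragment treats two versions' write payloads as equivalent whenever their concatenation matches at the end of an epoch, regardless of how each version chunked its writes. The epoch_buffer type and the fixed capacity are illustrative assumptions, not details of our prototype.

/*
 * Sketch of epoch-based equivalence of write sequences: within an epoch,
 * only the concatenated bytes must match, not the individual write() calls.
 */
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define EPOCH_CAPACITY 4096

struct epoch_buffer {
    unsigned char data[EPOCH_CAPACITY];
    size_t used;
};

/* Record the payload of one write() performed by a version in this epoch. */
static bool epoch_record_write(struct epoch_buffer *eb,
                               const void *payload, size_t len)
{
    if (eb->used + len > EPOCH_CAPACITY)
        return false;                 /* epoch full: force a comparison now */
    memcpy(eb->data + eb->used, payload, len);
    eb->used += len;
    return true;
}

/* At the end of the epoch, the two versions are considered equivalent if
 * they produced the same overall output, however it was chunked. */
static bool epochs_equivalent(const struct epoch_buffer *v_old,
                              const struct epoch_buffer *v_new)
{
    return v_old->used == v_new->used &&
           memcmp(v_old->data, v_new->data, v_old->used) == 0;
}

int main(void)
{
    struct epoch_buffer old_v = { {0}, 0 }, new_v = { {0}, 0 };
    /* The old version emits "hello world" as one write, the new one as two. */
    epoch_record_write(&old_v, "hello world", 11);
    epoch_record_write(&new_v, "hello ", 6);
    epoch_record_write(&new_v, "world", 5);
    return epochs_equivalent(&old_v, &new_v) ? 0 : 1;   /* equivalent: 0 */
}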
Second, intercepting every system call often has a significant impact on performance. We plan to alleviate this problem by using two techniques inspired by existing virtualization technology: (1) we plan to provide multi-version applications with a paravirtualization API [23] that would allow them to communicate directly with the virtualization framework; and (2) we intend to combine this API with a binary translation approach [20] that would enable us to dynamically replace certain system calls with more efficient calls into the virtualization framework. Binary translation could also be used to dynamically replace code components that are proven to be safe and do not need to be replicated across multiple versions.

Furthermore, we plan to enhance our prototype implementation with the ability to dynamically adjust the number of versions that are run concurrently. This will ensure that multi-version applications can consume all available resources (i.e., idle processor time) without affecting the overall system performance during peak load. Finally, we would like to be able to transparently run multi-version applications on multiple underlying platforms, ranging from multicore processors to large-scale data centers. This requires the ability to span our virtualized environment across multiple logical as well as physical nodes. In particular, we aim to include the possibility of executing certain versions of an application remotely, to enable scenarios with hundreds or even thousands of application versions.

V. RELATED WORK
The idea of concurrently running multiple versions (or multi-version execution) of the same application was first explored in the context of N-version programming, a software development approach introduced in the 1970s in which multiple teams of programmers independently develop functionally equivalent versions of the same program in order to minimize the risk of having the same bugs in all versions [7]. At runtime, these versions are executed in parallel and majority voting is used to continue in the best possible way when a divergence occurs.

Recently, several researchers have proposed techniques that apply a form of N-version programming based on automatically generated software variants [3], [6], [9], [11], [18], [21], [24], [25]. For example, DieHard [3] uses heap over-provisioning and full randomization of object placement and memory reuse to run multiple replicas and reduce the likelihood that a memory error will have any effect; and Orchestra [18] runs two versions of the same application with stacks growing in opposite directions and synchronizes their execution at the level of system calls, raising an alarm whenever a divergence is detected, as would be caused by a stack-based buffer overflow attack. Cox et al. [9] propose a general framework for increasing application security by running in parallel several automatically generated diversified variants of the same program. The technique was implemented in two prototypes, one which runs the variants on different machines, and one which runs them on the same machine and synchronizes them at the system call level, using a modified Linux kernel.

There are two key differences between our approach and previous work in this space. First, we do not rely on automatically generated variants, but instead propose to use existing software versions as a mechanism for improving software updates. This also means that, as opposed to previous solutions, the versions run in parallel are not semantically equivalent; this eliminates the challenge of generating diversified variants and creates opportunities in terms of recovery from failures, but also introduces additional challenges in terms of synchronizing the execution of the different versions.
Second, while previous work has focused on detecting divergences, our key concern is to survive them, in order to increase the reliability, availability, and security of the overall application. Research on surviving software failures has received a lot of attention in the past [5], [8], [15]-[17], [19], [22], and our proposed approach can benefit from the techniques developed in this context.

Previous work on improving the software update process has looked at different aspects related to managing and deploying new software versions. For example, Beattie et al. [2] have looked at the issue of timing the application of security updates, while Crameri et al. [10] have proposed a framework for staged deployment, in which user machines are clustered according to their environment and software updates are tested across clusters using several different strategies. In relation to this work, our approach encourages users to always apply a software update, but it would still benefit from effective update strategies in order to decide what versions to keep when resources are limited.

Many large-scale services, such as Facebook and Flickr, use a continuous deployment approach, where new versions are continuously released to users [12], [14], but each version is often made accessible only to a fraction of users to prevent a complete outage in case of newly introduced errors. While this approach helps minimize the number of users affected by new bugs, certain bugs may manifest themselves only following prolonged operation, after the release has been deployed to the entire user base. We believe our proposed approach is complementary to continuous deployment, and could be effectively combined with it.

VI. CONCLUSION
Software updates are an integral part of the software development and maintenance process, but unfortunately they present a high risk, as new releases often introduce new bugs and security vulnerabilities. In this position paper, we have argued for a new way of performing software updates, in which the new version of an application is run in parallel with old application versions, in order to increase the reliability and security of the overall system. We believe that multi-version software updates can have a significant impact on current software engineering
practices, by allowing frequent software updates without sacrificing the stability and security of older versions.

ACKNOWLEDGMENTS
We would like to thank Paolo Costa, Peter Pietzuch and Alex Wolf for discussions on multiplicity computing [4], and for their feedback on the text. Petr Hosek's doctoral studies are supported by a Google European PhD Fellowship.

REFERENCES
[1] L. A. Barroso and U. Hölzle, The case for energy-proportional computing, Computer, vol. 40, 2007.
[2] S. Beattie, S. Arnold, C. Cowan, P. Wagle, and C. Wright, Timing the application of security patches for optimal uptime, in Proceedings of the 16th USENIX Conference on System Administration (LISA 02), Nov. 2002.
[3] E. D. Berger and B. G. Zorn, DieHard: probabilistic memory safety for unsafe languages, in Proc. of the Conference on Programming Language Design and Implementation (PLDI 06), Jun. 2006.
[4] C. Cadar, P. Pietzuch, and A. L. Wolf, Multiplicity computing: A vision of software engineering for next-generation computing platform applications, in Proceedings of the FSE/SDP Workshop on the Future of Software Engineering Research (FoSER 10), Nov. 2010.
[5] G. Candea, S. Kawamoto, Y. Fujiki, G. Friedman, and A. Fox, Microreboot - a technique for cheap recovery, in Proc. of the 6th USENIX Symposium on Operating Systems Design and Implementation (OSDI 04), Dec. 2004.
[6] R. Capizzi, A. Long, V. Venkatakrishnan, and A. P. Sistla, Preventing information leaks through shadow executions, in Proc. of the 24th Annual Computer Security Applications Conference (ACSAC 08), Dec. 2008.
[7] L. Chen and A. Avizienis, N-version programming: A fault-tolerance approach to reliability of software operation, in Proc. of the 8th IEEE International Symposium on Fault Tolerant Computing (FTCS 78), Jun. 1978.
[8] M. Costa, J. Crowcroft, M. Castro, A. Rowstron, L. Zhou, L. Zhang, and P. Barham, Vigilante: end-to-end containment of Internet worms, in Proc. of the 20th ACM Symposium on Operating Systems Principles (SOSP 05), Oct. 2005.
[9] B. Cox, D. Evans, A. Filipi, J. Rowanhill, W. Hu, J. Davidson, J. Knight, A. Nguyen-Tuong, and J. Hiser, N-variant systems: a secretless framework for security through diversity, in Proc. of the 15th USENIX Security Symposium (USENIX Security 06), Jul.-Aug. 2006.
[10] O. Crameri, N. Knezevic, D. Kostic, R. Bianchini, and W. Zwaenepoel, Staged deployment in Mirage, an integrated software upgrade testing and distribution system, in Proc. of the 21st ACM Symposium on Operating Systems Principles (SOSP 07), Oct. 2007.
[11] D. Devriese and F. Piessens, Noninterference through secure multi-execution, in Proc. of the IEEE Symposium on Security and Privacy (IEEE S&P 10), May 2010.
[12] R. Harmes, Flipping out, 12/02/flipping-out/.
[13] P. Hosek and C. Cadar, Safe software updates via multi-version execution, Imperial College London, Tech. Rep. DTR11-13, Nov. 2011.
[14] R. Johnson, OOPSLA keynote: Moving fast at scale - lessons learned at Facebook, in Proc. of the 24th Annual Conference on Object-Oriented Programming Systems, Languages and Applications (OOPSLA 09), Oct. 2009.
[15] J. H. Perkins, S. Kim, S. Larsen, S. Amarasinghe, J. Bachrach, M. Carbin, C. Pacheco, F. Sherwood, S. Sidiroglou, G. Sullivan, W. F. Wong, Y. Zibin, M. D. Ernst, and M. Rinard, Automatically patching errors in deployed software, in Proc. of the 22nd ACM Symposium on Operating Systems Principles (SOSP 09), Oct. 2009.
[16] F. Qin, J. Tucek, J. Sundaresan, and Y. Zhou, Rx: treating bugs as allergies - a safe method to survive software failures, in Proc. of the 20th ACM Symposium on Operating Systems Principles (SOSP 05), Oct. 2005.
[17] M. Rinard, C. Cadar, D. Dumitran, D. M. Roy, T. Leu, and W. S. Beebee, Jr., Enhancing server availability and security through failure-oblivious computing, in Proc. of the 6th USENIX Symposium on Operating Systems Design and Implementation (OSDI 04), Dec. 2004.
[18] B. Salamat, T. Jackson, A. Gal, and M. Franz, Orchestra: intrusion detection using parallel execution and monitoring of program variants in user-space, in Proc. of the 4th European Conference on Computer Systems (EuroSys 09), Mar.-Apr. 2009.
[19] S. Sidiroglou and A. D. Keromytis, Execution transactions for defending against software failures: Use and evaluation, Springer International Journal of Information Security (IJIS), vol. 5, no. 2.
[20] R. L. Sites, A. Chernoff, M. B. Kirk, M. P. Marks, and S. G. Robinson, Binary translation, Communications of the Association for Computing Machinery (CACM), vol. 36, 1993.
[21] O. Trachsel and T. R. Gross, Variant-based competitive parallel execution of sequential programs, in Proc. of the 7th ACM International Conference on Computing Frontiers (CF 10), May 2010.
[22] K. Veeraraghavan, P. M. Chen, J. Flinn, and S. Narayanasamy, Detecting and surviving data races using complementary schedules, in Proc. of the 23rd ACM Symposium on Operating Systems Principles (SOSP 11), Oct. 2011.
[23] A. Whitaker, M. Shaw, and S. D. Gribble, Scale and performance in the Denali isolation kernel, in Proc. of the 5th USENIX Symposium on Operating Systems Design and Implementation (OSDI 02), Dec. 2002.
[24] H. Xue, N. Dautenhahn, and S. T. King, Using replicated execution for a more secure and reliable web browser, in Proc. of the 19th Network and Distributed System Security Symposium (NDSS 12), Feb. 2012.
[25] A. R. Yumerefendi, B. Mickle, and L. P. Cox, TightLip: Keeping applications from spilling the beans, in Proc. of the 4th USENIX Symposium on Networked Systems Design and Implementation (NSDI 07), Apr. 2007.