Hardware Sizing Method for Information System in Korea
Jonghei Ra 1, Haeyong Jung 2 and Kwangdon Choi 3

1 Department of e-business, Gwangju University, [email protected]
2 Department of MIS, Nazarene University, [email protected]
3 Department of e-business, Hansei University, [email protected]

Abstract

Accurate system sizing is essential for efficient investment. Its benefits are generally viewed in terms of avoiding both excess equipment and the lost opportunity cost of being unable to support business needs. We propose a calculating method for the hardware components, namely CPU, memory, system disk and data disk, according to application system type.

Keywords: Hardware Sizing, System Performance, Information System

1. Introduction

The introduction of information systems is recognized as necessary infrastructure in both the public and private sectors, regardless of organizational characteristics. In Korea, investment in information systems is continuously expanding in the public sector to improve productivity and the level of service. In addition, rapid transitions are taking place in the public sector from mainframes to client/server, Internet and intranet architectures, which has put more weight on system performance and capacity management due to the complexity and increasing usage of systems. In other words, it is very important to calculate the required hardware resources accurately prior to the introduction of a system, because failure to manage system performance and capacity sizing can cause high cost and wasted resources, lower productivity, and a breakdown of trust and credibility due to poor service [1, 2, 6, 13]. According to a report of the Korean government, the level of resource utilization is very low (CPU utilization is 46% on average), which stems from the lack of accurate resource capacity sizing before introducing an information system.
However, it is not easy to calculate the appropriate resource capacity, because the resource capacity of an information system should be determined from business characteristics, the estimated workload growth rate, the level of system usage by end users, and the technical architecture [5, 9, 10]. In general, resource capacity sizing of information systems has been conducted by system suppliers or internal staff using non-standardized methodologies, even though hardware generally takes up 30%~50% of total project cost. In particular, there is no standardized methodology for resource capacity sizing and CPU
performance evaluation, which becomes controversial when public organizations consider building their own information systems. Also, the price of a server varies depending on the CPU specification. Therefore, it is necessary to establish a methodology for calculating the required resource capacity for each public-organization project, just as there is a standard for software cost quotation in public-organization projects [7, 8]. We propose a sizing methodology based on empirical analysis. The empirical analysis was conducted through a survey of working-level experts in public organizations and debate with expert groups, using the existing capacity sizing framework and hardware capacity sizing guidelines. This paper is organized as follows. Section 2 introduces related works, Section 3 presents the research design, Section 4 provides the hardware sizing method, and Section 5 concludes the paper with some future works.

2. Related works

Different terms are commonly used for the identical notion of hardware resource capacity calculation, such as capacity planning, capacity sizing, and system sizing, with few differences among them. System capacity sizing determines system requirements such as CPU type and number, disk type and volume, and memory volume, based on concepts defined by organizations such as TPC (Transaction Processing Performance Council) [12], SPEC (Standard Performance Evaluation Corporation) [11] and IDEAs. In other words, system capacity sizing uses a mathematical methodology based on business processes and applications, which differs from approaches in which the required capacity is decided by the system architecture and application. Hardware sizing has been studied when a software vendor specifies the most suitable hardware size for its package [9]. But there is no hardware sizing method for an organization ordering the construction of an information system consisting of both hardware and software.
Much research exists on software sizing, such as LOC (Lines Of Code) and FP (Function Point), but research on hardware sizing is scarce. The existing structured studies of capacity sizing include a tool for capacity sizing (1994), a study of hardware capacity sizing (2002), a capacity sizing technique and framework study for information systems (2003), and a capacity sizing study by information system size (2004), all done by working-level researchers at the National Computerization Agency of Korea. However, these studies have limitations as general guidelines due to their lack of credibility and objectivity [7]. Performance metrics such as tpmC (transactions per minute under the TPC-C benchmark) and OPS (Operations Per Second) are in use [3, 4]. But taking the CPU, the most important element of hardware sizing, as an example, tpmC, the most-used metric, has three problems. First, there is a discrepancy in CPU capacity sizing criteria. Second, some hardware suppliers do not apply the TPC performance standard. Third, it does not reflect the characteristics of web-based software. As a result of reviewing all the case studies and former studies, it is necessary to uphold objectivity, develop concrete capacity sizing, and differentiate the calculating methods based on business type and size.
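As a hedged illustration only (not a formula from this paper), a required tpmC figure is typically turned into a CPU count by dividing by the per-CPU tpmC rating of a candidate server, derated by a target utilization; all numbers below are assumptions:

```python
import math

def cpus_needed(required_tpmc: float, per_cpu_tpmc: float,
                target_utilization: float = 0.7) -> int:
    """Number of CPUs needed so that the required tpmC load fits
    within the target utilization of the chosen processor model."""
    usable_tpmc_per_cpu = per_cpu_tpmc * target_utilization
    return math.ceil(required_tpmc / usable_tpmc_per_cpu)

# Hypothetical: 30,000 tpmC required on CPUs rated 12,000 tpmC each,
# keeping each CPU at no more than 70% load.
n = cpus_needed(30000, 12000)  # 30000 / 8400 -> 4 CPUs
```

This illustrates the first problem noted above: the result depends entirely on whose tpmC rating and which derating factor is used, which is why the criteria discrepancy matters.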
3. Research design

This study conducted an empirical analysis with groups of experts from the public sector and from suppliers in order to establish and generalize the appropriateness of the capacity sizing method. In addition, this study established criteria for analysis and interpretation through debate with expert groups and validated the objectivity of the method and of the items that should be included in the capacity sizing formula. Since there was no capacity calculation formula in former research and no standardized formula, the detailed hardware elements in this study were compiled from information from the NCA, public organizations, and system integrators. The detailed hardware capacity formulas and their elements are shown in Table 1. The hardware items, including CPU, memory and disk, were the major items to be validated for appropriateness of method, and objectivity was validated through a survey of operational-level experts and capacity sizing experts from the public organizations.

Table 1. Items of hardware capacity calculation and survey items

CPU for WEB/WAS server
- Number of concurrent users: number of concurrent users
- Application interface load correction: load rate on interfaces communicating with other servers
- Peak time load correction: load rate considering peak time load upon a significant increase in abnormal accesses
- System redundancy: redundancy for stable operation based on the importance and urgency of the business
- Number of operations per user: number of operations per minute per person

CPU for OLTP server
- Number of concurrent users: number of concurrent users
- Number of transactions: number of transactions per minute per person
- Default tpmC correction: correction based on system size to run the system in an optimized state
- Peak time load correction: correction considering peak time load
- Database size correction: correction for the data size of the transactions to be processed
- Application complexity correction: correction for the complexity of the program
- Application architecture correction: correction for the application architecture
- Application load correction: correction for the actual user operation environment
- Network correction: correction for network bandwidth
- Cluster correction: correction for outages under a cluster environment
- Redundancy correction: redundancy for unexpected situations and expansion

Memory
- System area: required memory space for OS, DBMS, middleware and other utilities
- System managed area: required memory space for system operation
- Number of users: number of users using the system
- Required memory per user: required memory space per user for application, middleware and DBMS
- Buffer cache: buffer cache size to reduce disk I/O
- Cluster correction: redundancy for the increased load on one system when the other clustered system is down
- Redundancy correction: redundancy for unexpected situations and expansion

System disk
- System O/S area: disk space for the O/S and system software
- Application S/W area: disk space for middleware, DBMS, application S/W and various utilities
- SWAP area: disk space for swapping
- System disk redundancy: disk space for safe data management

Data disk
- Data area: disk space for data storage
- Backup area: disk space for copied files and the database in case of hardware outage
- RAID area: parity space for RAID disks
- Data disk redundancy: disk space for safe data management

The objectivity of capacity sizing was measured on a scale of one to five by questionnaire. A pilot test of the survey was conducted with 4 experts from public organizations and 2 experts from system suppliers. The capacity calculation formulas and the importance of their elements were finalized against the assessment criteria after debate among the expert groups. The population is composed of two groups: working-level experts who have implemented and were operating information systems, and capacity calculation experts from suppliers. The working-level experts were limited to public organizations which had rolled out and were running information systems, including government organizations founded under government organization law, local governments, nonprofit government-invested organizations and government enterprises. The supplier survey was limited to the 9 major suppliers of servers to public organizations.
The distributed survey returned responses across 5 areas from 61 participants in 29 government organizations, and 60 responses were used for analysis, excluding one participant who did not yet have an information system in operation. For the supplier survey, 44 of 45 responses were used for analysis. The 45 participants consisted of 5 from each supplier and SI enterprise, lest a specific enterprise be overemphasized. As for working experience, experts from the public organizations had 12.8 years of experience on average and had worked 4.38 years on average in their current position. Experts from the suppliers had 9.8 years of experience on average and had worked 5.15 years on average in their current position. The information systems that the 60 government-organization participants had implemented and were operating included internal-use applications (35 systems, 58.3%) and public service systems (24 systems, 40%). Classified by system area, unit applications were 33 (55%) and enterprise applications were 27 (45%). Table 2 shows the classification by objective of the information system and by system building area. In addition, the average age of the servers used by the participants was 2.3 years, and the server models in use are shown in Table 3.

Table 2. Classification by objective of information system and scope of system building

Objective of system                     | Unit App.  | IT Infra. | Total
Improve efficiency of business process  | 23 (38.3%) | 12 (20%)  | 35 (58.3%)
Original service                        | 9 (15%)    | 15 (25%)  | 24 (40%)
Both                                    | 1 (1.7%)   | -         | 1 (1.7%)
Total                                   | 33 (55%)   | 27 (45%)  | 60

Table 3. Distribution of server models among survey participants from public organizations

H/W vendor        | Frequency | Rate
SUN Microsystems  | 20        | 33%
IBM               | 13        | 22%
HP                | 12        | 20%
Fujitsu           | 6         | 10%
etc.              | 9         | 15%
Total             | 60        | 100%

4. Hardware sizing method

4.1 Analysis of appropriateness and credibility

Appropriateness validation reviews whether a measuring tool measures the intended concept appropriately. Validity can be classified, based on the assessment method, into content validity, criterion-related validity and construct validity. This study generates formulas used to objectively compute the optimal system capacity (CPU, memory, disk) for information system building, rather than defining and measuring a concept.
Thus, construct validity is not at issue, as this study is intended to validate the formulas and formula elements presented in the capacity sizing guideline written in 2003. This study assumes that the content validity and sampling validity of this research are confirmed, as the 2003 research created the capacity calculation formulas based on experience of capacity sizing for public-organization information system building and on debate among expert groups. The credibility analysis results of this study show Cronbach's α values over 0.6, as shown in Table 4. This score is not high, but it does not pose a problem, as this is not conceptual research. In addition, the original analysis items were kept, as there was no big variation in Cronbach's α when individual elements were removed.

Table 4. Analysis result of credibility

Classification       | Number of survey questionnaire items | Cronbach's α value
CPU for WEB/WAS      | 5 items                              |
CPU for OLTP server  | 12 items                             |
Memory               | 7 items                              |
System disk          | 4 items                              |
Data disk            | 4 items                              |

4.2 Calculating formula

We use all items in Table 1 as initial elements of the calculating formulas, and then validate them by questionnaire. As criteria for the importance of formula elements, we first excluded elements scoring below 3.5 on average from both the public organizations and the suppliers; second, even where the average was 3.5 or above, we reviewed whether there was a big difference between the public organizations' and the suppliers' scores. If a big difference existed, the element was removed from the formula; if not, it was included. The sizing elements of hardware were chosen by the importance analysis described above: the WEB/WAS CPU has 5, the OLTP CPU has 10, memory has 4 and the data disk has 4 sizing elements. We conducted a difference analysis between the two groups, experts from public organizations and experts from suppliers, to check whether they differed in the recognized validity of the calculating formulas and the importance of each element.

Table 5. Comparison between the two groups of experts, from the public organizations and the system suppliers, on the importance of each element of the capacity calculation formula of CPU for the WEB/WAS server

The CPU capacity sizing result for WEB/WAS is shown in Table 5. According to the survey analysis, the appropriateness of the formula is 3.70 on average, which exceeds the standard of 3.5 established by the expert groups, so the formula turned out to be appropriate. Furthermore, the importance of all 5 elements of the CPU capacity calculation formula is above 3.5 on average. Reviewing differences in recognition of the formula elements, there was a slight difference between the two groups on the number of simultaneous users and the number of operations per user. However, as the importance of these elements is above 3.5 in both groups, all 5 elements are included in the formula. Below is the CPU capacity calculation formula for the WEB/WAS server based on the survey result.

CPU Sizing = Number of simultaneous users * (application load correction + peak time load correction) * system reserve rate * Number of operations per user

According to the analysis of the CPU sizing survey for OLTP, the survey average is 3.79, which exceeds the standard of 3.5 established by the expert groups, so it turned out to be appropriate. The importance of each of the 12 individual elements used in the CPU capacity calculation formula for OLTP, such as number of processed transactions, number of simultaneous users, peak time correction, and reserve rate correction, is 3.5 or above on average. The analysis of differences between the public organizations and the system suppliers showed quite an amount of difference on 6 elements, including number of simultaneous users, number of processed transactions, application complexity correction, and application architecture correction. Below is the CPU capacity calculation formula for the OLTP server based on the survey result.
CPU Sizing = {(Number of simultaneous users * Number of transactions processed) * (default tpmC correction + peak time correction + database size correction + application complexity correction + application load correction + network correction + cluster correction)} * reserve rate

According to the analysis of the memory sizing survey results, the survey average is 3.70, which exceeds the standard of 3.5 established by the expert groups, so it turned out to be appropriate. Of the 7 elements used in the memory calculation formula, such as required memory per user, O/S managed area, reserve rate, buffer cache, and cluster correction, 5 have an importance above 3.5, but the importance of the O/S managed area is 3.44, below the standard of 3.5. Thus, the O/S managed area element is excluded from the formula.
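The two CPU formulas above can be sketched as a minimal calculation. Every correction value and input below is an illustrative assumption, not a figure from the survey:

```python
def webwas_cpu_sizing(concurrent_users, ops_per_user_min,
                      app_load_corr, peak_corr, reserve_rate):
    """WEB/WAS CPU: users * (load + peak corrections) * reserve * ops/user."""
    return (concurrent_users * (app_load_corr + peak_corr)
            * reserve_rate * ops_per_user_min)

def oltp_cpu_sizing(concurrent_users, tx_per_user_min,
                    corrections, reserve_rate):
    """OLTP CPU: (users * transactions) * sum of the seven additive
    corrections from the formula, scaled by the reserve rate."""
    return (concurrent_users * tx_per_user_min
            * sum(corrections) * reserve_rate)

# Hypothetical inputs: 500 web users doing 2 ops/min; 300 OLTP users doing
# 5 tx/min with seven assumed corrections summing to 2.0; reserves 1.2/1.3.
web_tpmc = webwas_cpu_sizing(500, 2, 1.0, 0.3, 1.2)
oltp_tpmc = oltp_cpu_sizing(300, 5, [1.0, 0.3, 0.2, 0.2, 0.1, 0.1, 0.1], 1.3)
```

The resulting figures would then be compared against the tpmC ratings of candidate servers; the additive treatment of corrections follows the formulas as printed above.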
Table 6. Comparison between the two groups of experts, from the public organizations and the system suppliers, on the importance of each element of the capacity calculation formula of CPU for the OLTP server

Table 7. Comparison between the two groups of experts, from the public organizations and the system suppliers, on the importance of each element of the capacity calculation formula of memory

According to the analysis of differences between the public organizations and the system suppliers, there was a significant difference on the reserve rate and the cluster correction. The importance of the reserve rate is 3.5, while the importance of the cluster correction is 3.4, below the standard of 3.5, so the cluster correction is excluded from the formula. Therefore, 5 elements are included, excluding the O/S managed area and the cluster correction. Below is the memory capacity sizing formula based on the survey result.

Memory Sizing = {system area + (number of users * required memory per user)} * buffer cache correction * reserve rate
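The memory formula can be sketched as follows; the explicit per-user memory term is our reading of the five surveyed elements, and all numeric inputs are assumptions for illustration:

```python
def memory_sizing_mb(system_area_mb, n_users, mem_per_user_mb,
                     buffer_cache_corr, reserve_rate):
    """Memory sizing: (system area + total per-user memory), scaled by
    the buffer cache correction and the reserve rate. Units: MB."""
    base = system_area_mb + n_users * mem_per_user_mb
    return base * buffer_cache_corr * reserve_rate

# Hypothetical: 2 GB system area, 200 users at 4 MB each,
# a 20% buffer cache uplift and a 25% reserve.
total_mb = memory_sizing_mb(2048, 200, 4, 1.2, 1.25)
```

The system area covers OS, DBMS and middleware as listed in Table 1, while the excluded O/S managed area and cluster correction do not appear.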
According to the analysis of the system disk sizing survey results, the appropriateness of the capacity calculation formula is 3.77 on average, which exceeds the standard of 3.5 established by the expert groups, so this formula turned out to be appropriate. The importance of the 4 elements, namely application S/W area, reserve rate, system O/S area, and SWAP area, is 3.5 or above on average. According to the analysis of differences between the public organizations and the system suppliers, there was quite an amount of difference on both the application S/W area and the reserve rate.

Table 8. Comparison between the two groups of experts, from the public organizations and the system suppliers, on the importance of each element of the capacity calculation formula of the system disk

However, these 2 elements are included in the formula, as the public organizations' responses for both are 3.5 or above. Below is the system disk capacity calculation formula based on the above survey results.

System Disk Sizing = (application S/W area + system O/S area + SWAP area) * reserve rate

According to the analysis of the data disk sizing survey results, the appropriateness of the capacity calculation formula is 3.78 on average, which exceeds the standard of 3.5 established by the expert groups, so this formula turned out to be appropriate. The importance of the 4 elements, namely data area, reserve rate, backup area, and RAID area, is 3.5 or above on average. According to the analysis of differences in recognition between the public organizations and the system suppliers, there was no significant difference on the 4 elements, so all 4 are included in the formula. Below is the data disk capacity calculation formula based on the above analysis results.

Data Disk Sizing = (data area + backup area + RAID area) * reserve rate
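Both disk formulas are additive sums with a reserve multiplier; a hedged sketch, where the data-disk combination is our reading of the four surveyed elements and all sizes are assumptions:

```python
def system_disk_gb(os_area, app_sw_area, swap_area, reserve_rate):
    """System disk: O/S + application S/W + SWAP areas, with reserve."""
    return (os_area + app_sw_area + swap_area) * reserve_rate

def data_disk_gb(data_area, backup_area, raid_area, reserve_rate):
    """Data disk: data + backup + RAID parity areas, with reserve."""
    return (data_area + backup_area + raid_area) * reserve_rate

# Hypothetical volumes in GB with a 30% reserve on both disks.
sys_gb = system_disk_gb(20, 50, 30, 1.3)
data_gb = data_disk_gb(500, 500, 100, 1.3)
```

In practice the RAID parity term depends on the RAID level chosen (e.g. one disk's worth of parity for RAID 5), which the formula leaves as an input.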
Table 9. Comparison between the two groups of experts, from the public organizations and the system suppliers, on the importance of each element of the capacity calculation formula of the data disk

5. Conclusions

In this paper, we aimed to improve the budget execution and investment efficiency of public organizations conducting information system projects. To this end, we proposed a hardware sizing method and conducted research to empirically extract capacity calculation formulas for elements such as CPU, memory and disk. The significance of this study is that validation was conducted through a survey and analysis of experts from both the public organizations and the system suppliers to establish the appropriateness of the existing capacity calculation formulas. We also plan to make an integrated guideline for information system sizing covering software as well as hardware. For this purpose, methods for linking and integrating the related criteria, hardware sizing, software cost and maintenance cost will be required.

6. References

[1] Isci, C., Buyuktosunoglu, A., Martonosi, M.: Long-term Workload Phases: Duration Predictions and Applications to DVFS, IEEE Micro (2005)
[2] Carrington, L., Snavely, A., Wolter, N.: A Performance Prediction Framework for Scientific Applications, Future Generation Computer Systems, Vol. 22, Issue 3 (2006)
[3] Compaq: Sizing a Thin Client Server Computing Solution Deploying Compaq ProLiant DL Series Servers (2001)
[4] Menascé, D.A.: Capacity Planning for Web Performance: Metrics, Models, and Methods, Prentice Hall (1998)
[5] Feitelson, D.G.: Metrics and Workload Effects on Computer Systems Evaluation, Computer, Vol. 36, No. 9 (2003)
[6] Feitelson, D.G., Tsafrir, D.: Workload Sanitation for Performance Evaluation, ISPASS (2006)
[7] Ra, J.H., Choi, K.D.: An Exploratory Study on Sizing Method for Information System, Journal of Korea SI Society, Vol. 3(2), pp. 9-20
[8] Missbach, M., Gibbels, P., Karnstädt, J., Stelzel, J., Wagenblast, T.: Adaptive Hardware Infrastructures for SAP, SAP Press (2005)
[9] National Computerization Agency: A Study on Hardware Sizing Technique and Framework for Information Systems (2003)
[10] Rowe, L., Jain, R.: ACM SIGMM Retreat Report, ACM Trans. Multimedia Computing, Communications, and Applications, Vol. 1, No. 1 (2005)
[11] Standard Performance Evaluation Corporation
[12] Transaction Processing Performance Council
[13] Lublin, U., Feitelson, D.G.: The Workload on Parallel Supercomputers: Modeling the Characteristics of Rigid Jobs, J. Parallel & Distributed Computing, Vol. 63, No. 11 (2003)

Authors

Jonghei Ra received a B.S. degree in information engineering from Sungkyunkwan University, Korea, in 1990, an M.S. degree in information engineering from Sungkyunkwan University, Korea, in 1992, and a Ph.D. degree in information engineering from Sungkyunkwan University, Korea. In 2003 he joined the faculty of Gwangju University, Korea, where he is currently a professor in the Department of e-business. His research interests include system performance, web computing, information system audit, and data mining.

Haeyong Jung received a B.S. degree in Management Information Systems from Kwangwoon University, Korea, in 1996, an M.S. degree in Management Information Systems from Kwangwoon University, Korea, and a Ph.D. degree in Management Information Systems from Kwangwoon University, Korea. In 2003 he joined the faculty of Korea Nazarene University, Korea, where he is currently a professor in the Department of Management Information Systems. His research interests include information system evaluation, e-learning, and management innovation through information technology.

Kwangdon Choi received a B.A. degree in Business Administration from Kwangwoon University, Korea, in 1985, an MBA degree in Information Management Systems from Hankuk University of Foreign Studies, Korea, in 1987, and a Ph.D. degree in Business Administration from Kwangwoon University, Korea. In 2002 he joined the faculty of Hansei University, Korea, where he is currently a professor in the Department of Electronic Business.
HP ProLiant BL660c Gen9 and Microsoft SQL Server 2014 technical brief
Technical white paper HP ProLiant BL660c Gen9 and Microsoft SQL Server 2014 technical brief Scale-up your Microsoft SQL Server environment to new heights Table of contents Executive summary... 2 Introduction...
FROM RELATIONAL TO OBJECT DATABASE MANAGEMENT SYSTEMS
FROM RELATIONAL TO OBJECT DATABASE MANAGEMENT SYSTEMS V. CHRISTOPHIDES Department of Computer Science & Engineering University of California, San Diego ICS - FORTH, Heraklion, Crete 1 I) INTRODUCTION 2
NEC Corporation of America Intro to High Availability / Fault Tolerant Solutions
NEC Corporation of America Intro to High Availability / Fault Tolerant Solutions 1 NEC Corporation Technology solutions leader for 100+ years Established 1899, headquartered in Tokyo First Japanese joint
How AWS Pricing Works May 2015
How AWS Pricing Works May 2015 (Please consult http://aws.amazon.com/whitepapers/ for the latest version of this paper) Page 1 of 15 Table of Contents Table of Contents... 2 Abstract... 3 Introduction...
ICRI-CI Retreat Architecture track
ICRI-CI Retreat Architecture track Uri Weiser June 5 th 2015 - Funnel: Memory Traffic Reduction for Big Data & Machine Learning (Uri) - Accelerators for Big Data & Machine Learning (Ran) - Machine Learning
Sizing guide for SAP and VMware ESX Server running on HP ProLiant x86-64 platforms
Sizing guide for SAP and VMware ESX Server running on HP ProLiant x86-64 platforms Executive summary... 2 Server virtualization overview... 2 Solution definition...2 SAP architecture... 2 VMware... 3 Virtual
What is RAID? data reliability with performance
What is RAID? RAID is the use of multiple disks and data distribution techniques to get better Resilience and/or Performance RAID stands for: Redundant Array of Inexpensive / Independent Disks RAID can
How To Create A Multi Disk Raid
Click on the diagram to see RAID 0 in action RAID Level 0 requires a minimum of 2 drives to implement RAID 0 implements a striped disk array, the data is broken down into blocks and each block is written
Mainframe alternative Solution Brief. MFA Sizing Study for a z/os mainframe workload running on a Microsoft and HP Mainframe Alternative (MFA)
Mainframe alternative Solution Brief MFA Sizing Study for a z/os mainframe workload running on a Microsoft and HP Mainframe Alternative (MFA) Mainframe alternative Solution Brief MFA Sizing Study for a
Comparison of Cloud vs. Tape Backup Performance and Costs with Oracle Database
JIOS, VOL. 35, NO. 1 (2011) SUBMITTED 02/11; ACCEPTED 06/11 UDC 004.75 Comparison of Cloud vs. Tape Backup Performance and Costs with Oracle Database University of Ljubljana Faculty of Computer and Information
COMP 7970 Storage Systems
COMP 797 Storage Systems Dr. Xiao Qin Department of Computer Science and Software Engineering Auburn University http://www.eng.auburn.edu/~xqin [email protected] COMP 797, Auburn University Slide 3b- Problems
Virtualization what it is?
IBM EMEA Virtualization what it is? Tomaž Vincek System p FTSS CEMAAS [email protected] 2007 IBM Corporation What is needed? A very strong requirement: reduce costs and optimize investments On a
System Infrastructure Non-Functional Requirements Related Item List
System Infrastructure Non-Functional Requirements Related Item List April 2013 Information-Technology Promotion Agency, Japan Software Engineering Center Copyright 2010 IPA [Usage conditions] 1. The copyright
Open Cloud Computing A Case for HPC CRO NGI Day Zagreb, Oct, 26th
Open Cloud Computing A Case for HPC CRO NGI Day Zagreb, Oct, 26th Philippe Trautmann HPC Business Development Manager Global Education @ Research Sun Microsystems, Inc. 1 The Cloud HPC and Cloud: any needs?
Managing your Domino Clusters
Managing your Domino Clusters Kathleen McGivney President and chief technologist, Sakura Consulting www.sakuraconsulting.com Paul Mooney Senior Technical Architect, Bluewave Technology www.bluewave.ie
The Impact of PaaS on Business Transformation
The Impact of PaaS on Business Transformation September 2014 Chris McCarthy Sr. Vice President Information Technology 1 Legacy Technology Silos Opportunities Business units Infrastructure Provisioning
Performance and scalability of a large OLTP workload
Performance and scalability of a large OLTP workload ii Performance and scalability of a large OLTP workload Contents Performance and scalability of a large OLTP workload with DB2 9 for System z on Linux..............
Online Remote Data Backup for iscsi-based Storage Systems
Online Remote Data Backup for iscsi-based Storage Systems Dan Zhou, Li Ou, Xubin (Ben) He Department of Electrical and Computer Engineering Tennessee Technological University Cookeville, TN 38505, USA
0408 - Avoid Paying The Virtualization Tax: Deploying Virtualized BI 4.0 The Right Way. Ashish C. Morzaria, SAP
0408 - Avoid Paying The Virtualization Tax: Deploying Virtualized BI 4.0 The Right Way Ashish C. Morzaria, SAP LEARNING POINTS Understanding the Virtualization Tax : What is it, how it affects you How
IOS110. Virtualization 5/27/2014 1
IOS110 Virtualization 5/27/2014 1 Agenda What is Virtualization? Types of Virtualization. Advantages and Disadvantages. Virtualization software Hyper V What is Virtualization? Virtualization Refers to
Express5800 Scalable Enterprise Server Reference Architecture. For NEC PCIe SSD Appliance for Microsoft SQL Server
Express5800 Scalable Enterprise Server Reference Architecture For NEC PCIe SSD Appliance for Microsoft SQL Server An appliance that significantly improves performance of enterprise systems and large-scale
Operating System for the K computer
Operating System for the K computer Jun Moroo Masahiko Yamada Takeharu Kato For the K computer to achieve the world s highest performance, Fujitsu has worked on the following three performance improvements
HP Smart Array Controllers and basic RAID performance factors
Technical white paper HP Smart Array Controllers and basic RAID performance factors Technology brief Table of contents Abstract 2 Benefits of drive arrays 2 Factors that affect performance 2 HP Smart Array
Ensuring Security in Cloud with Multi-Level IDS and Log Management System
Ensuring Security in Cloud with Multi-Level IDS and Log Management System 1 Prema Jain, 2 Ashwin Kumar PG Scholar, Mangalore Institute of Technology & Engineering, Moodbidri, Karnataka1, Assistant Professor,
Optimizing Shared Resource Contention in HPC Clusters
Optimizing Shared Resource Contention in HPC Clusters Sergey Blagodurov Simon Fraser University Alexandra Fedorova Simon Fraser University Abstract Contention for shared resources in HPC clusters occurs
View Point. Performance Monitoring in Cloud. www.infosys.com. Abstract. - Vineetha V
View Point Performance Monitoring in Cloud - Vineetha V Abstract Performance Monitoring is an integral part of maintenance. Requirements for a monitoring solution for Cloud are totally different from a
Best Practices for Monitoring Databases on VMware. Dean Richards Senior DBA, Confio Software
Best Practices for Monitoring Databases on VMware Dean Richards Senior DBA, Confio Software 1 Who Am I? 20+ Years in Oracle & SQL Server DBA and Developer Worked for Oracle Consulting Specialize in Performance
5 Performance Management for Web Services. Rolf Stadler School of Electrical Engineering KTH Royal Institute of Technology. [email protected].
5 Performance Management for Web Services Rolf Stadler School of Electrical Engineering KTH Royal Institute of Technology [email protected] April 2008 Overview Service Management Performance Mgt QoS Mgt
CASE STUDY: Oracle TimesTen In-Memory Database and Shared Disk HA Implementation at Instance level. -ORACLE TIMESTEN 11gR1
CASE STUDY: Oracle TimesTen In-Memory Database and Shared Disk HA Implementation at Instance level -ORACLE TIMESTEN 11gR1 CASE STUDY Oracle TimesTen In-Memory Database and Shared Disk HA Implementation
The Design and Implementation of the Integrated Model of the Advertisement and Remote Control System for an Elevator
Vol.8, No.3 (2014), pp.107-118 http://dx.doi.org/10.14257/ijsh.2014.8.3.10 The Design and Implementation of the Integrated Model of the Advertisement and Remote Control System for an Elevator Woon-Yong
COMPONENTS in a database environment
COMPONENTS in a database environment DATA data is integrated and shared by many users. a database is a representation of a collection of related data. underlying principles: hierarchical, network, relational
Cloud Based Application Architectures using Smart Computing
Cloud Based Application Architectures using Smart Computing How to Use this Guide Joyent Smart Technology represents a sophisticated evolution in cloud computing infrastructure. Most cloud computing products
Brainlab Node TM Technical Specifications
Brainlab Node TM Technical Specifications BRAINLAB NODE TM HP ProLiant DL360p Gen 8 CPU: Chipset: RAM: HDD: RAID: Graphics: LAN: HW Monitoring: Height: Width: Length: Weight: Operating System: 2x Intel
How to Manage IT Resource Consumption
How to Manage IT Resource Consumption At an Application Level with TeamQuest Workloads In this paper, John Miecielica of Metavante, provides a step-by-step example showing how he uses TeamQuest software
ensurcloud Service Level Agreement (SLA)
ensurcloud Service Level Agreement (SLA) Table of Contents ensurcloud Service Level Agreement 1. Overview... 3 1.1. Definitions and abbreviations... 3 2. Duties and Responsibilities... 5 2.1. Scope and
Project management for cloud computing development
Page 16 Oeconomics of Knowledge, Volume 2, Issue 2, 2Q 2010 Project management for cloud computing development Paul POCATILU, PhD, Associate Professor Department of Economic Informatics Academy of Economic
Converged storage architecture for Oracle RAC based on NVMe SSDs and standard x86 servers
Converged storage architecture for Oracle RAC based on NVMe SSDs and standard x86 servers White Paper rev. 2015-11-27 2015 FlashGrid Inc. 1 www.flashgrid.io Abstract Oracle Real Application Clusters (RAC)
Capacity Estimation for Linux Workloads
Capacity Estimation for Linux Workloads Session L985 David Boyes Sine Nomine Associates 1 Agenda General Capacity Planning Issues Virtual Machine History and Value Unique Capacity Issues in Virtual Machines
Chapter 5: System Software: Operating Systems and Utility Programs
Understanding Computers Today and Tomorrow 12 th Edition Chapter 5: System Software: Operating Systems and Utility Programs Learning Objectives Understand the difference between system software and application
A Dynamic Resource Management with Energy Saving Mechanism for Supporting Cloud Computing
A Dynamic Resource Management with Energy Saving Mechanism for Supporting Cloud Computing Liang-Teh Lee, Kang-Yuan Liu, Hui-Yang Huang and Chia-Ying Tseng Department of Computer Science and Engineering,
Designing and Embodiment of Software that Creates Middle Ware for Resource Management in Embedded System
, pp.97-108 http://dx.doi.org/10.14257/ijseia.2014.8.6.08 Designing and Embodiment of Software that Creates Middle Ware for Resource Management in Embedded System Suk Hwan Moon and Cheol sick Lee Department
MONITORING A WEBCENTER CONTENT DEPLOYMENT WITH ENTERPRISE MANAGER
MONITORING A WEBCENTER CONTENT DEPLOYMENT WITH ENTERPRISE MANAGER Andrew Bennett, TEAM Informatics, Inc. Why We Monitor During any software implementation there comes a time where a question is raised
HP ProLiant BL460c takes #1 performance on Siebel CRM Release 8.0 Benchmark Industry Applications running Linux, Oracle
HP ProLiant BL460c takes #1 performance on Siebel CRM Release 8.0 Benchmark Industry Applications running Linux, Oracle HP first to run benchmark with Oracle Enterprise Linux HP Leadership» The HP ProLiant
Legal Notices... 2. Introduction... 3
HP Asset Manager Asset Manager 5.10 Sizing Guide Using the Oracle Database Server, or IBM DB2 Database Server, or Microsoft SQL Server Legal Notices... 2 Introduction... 3 Asset Manager Architecture...
vrops Microsoft SQL Server MANAGEMENT PACK OVERVIEW
vrops Microsoft SQL Server MANAGEMENT PACK OVERVIEW What does Blue Medora do? We connect business critical applications, databases, storage, and converged systems to leading virtualization and cloud management
Performance Prediction, Sizing and Capacity Planning for Distributed E-Commerce Applications
Performance Prediction, Sizing and Capacity Planning for Distributed E-Commerce Applications by Samuel D. Kounev ([email protected]) Information Technology Transfer Office Abstract Modern e-commerce
VERITAS Database Edition 2.1.2 for Oracle on HP-UX 11i. Performance Report
VERITAS Database Edition 2.1.2 for Oracle on HP-UX 11i Performance Report V E R I T A S W H I T E P A P E R Table of Contents Introduction.................................................................................1
HP ProLiant BL460c achieves #1 performance spot on Siebel CRM Release 8.0 Benchmark Industry Applications running Microsoft, Oracle
HP ProLiant BL460c achieves #1 performance spot on Siebel CRM Release 8.0 Benchmark Industry Applications running Microsoft, Oracle HP ProLiant BL685c takes #2 spot HP Leadership» The HP ProLiant BL460c
MS SQL Performance (Tuning) Best Practices:
MS SQL Performance (Tuning) Best Practices: 1. Don t share the SQL server hardware with other services If other workloads are running on the same server where SQL Server is running, memory and other hardware
Axapta Object Server MICROSOFT BUSINESS SOLUTIONS AXAPTA
MICROSOFT BUSINESS SOLUTIONS AXAPTA Axapta Object Server Microsoft Business Solutions Axapta Object Server minimises bandwidth requirements and ensures easy, efficient and homogeneous client deployment.
HP Smart Array 5i Plus Controller and Battery Backed Write Cache (BBWC) Enabler
Overview HP Smart Array 5i Plus Controller and Battery Backed Write Cache (BBWC) Enabler Models Smart Array 5i Plus Controller and BBWC Enabler bundled Option Kit (for ProLiant DL380 G2, ProLiant DL380
Real Time Network Server Monitoring using Smartphone with Dynamic Load Balancing
www.ijcsi.org 227 Real Time Network Server Monitoring using Smartphone with Dynamic Load Balancing Dhuha Basheer Abdullah 1, Zeena Abdulgafar Thanoon 2, 1 Computer Science Department, Mosul University,
Performance Management for Cloud-based Applications STC 2012
Performance Management for Cloud-based Applications STC 2012 1 Agenda Context Problem Statement Cloud Architecture Key Performance Challenges in Cloud Challenges & Recommendations 2 Context Cloud Computing
A STUDY OF WORKLOAD CHARACTERIZATION IN WEB BENCHMARKING TOOLS FOR WEB SERVER CLUSTERS
382 A STUDY OF WORKLOAD CHARACTERIZATION IN WEB BENCHMARKING TOOLS FOR WEB SERVER CLUSTERS Syed Mutahar Aaqib 1, Lalitsen Sharma 2 1 Research Scholar, 2 Associate Professor University of Jammu, India Abstract:
