Distributed Systems Meet Economics: Pricing in the Cloud
Hongyi Wang, Qingfeng Jing, Rishan Chen, Bingsheng He, Zhengping Qian, Lidong Zhou
Microsoft Research Asia, Shanghai Jiao Tong University, Peking University

Abstract

Cloud computing allows users to perform computation in a public cloud, with a pricing scheme typically based on incurred resource consumption. While cloud computing is often considered merely a new application of classic distributed systems, we argue that, by decoupling users from cloud providers with a pricing scheme as the bridge, cloud computing has fundamentally changed the landscape of system design and optimization. Our preliminary studies on the Amazon EC2 cloud service and on a local cloud-computing testbed reveal an interesting interplay between distributed systems and economics related to pricing. We believe that this new angle of looking at distributed systems can foster new insights into cloud computing.

1 Introduction

Recent cloud providers (e.g., Amazon Web Services, Google App Engine, and Windows Azure) enable users to perform their computation tasks in a public cloud. These providers use pricing schemes based on incurred resource consumption. For example, Amazon EC2 provides a virtual machine with a single CPU core at a price of $0.095 per hour. This pay-as-you-go model lets users utilize a public cloud at a fraction of the cost of owning a dedicated private one, while allowing providers to profit by serving a large number of users. Case studies from these cloud providers [2, 10, 27] indicate that a variety of applications have been deployed in the cloud, such as storage backup, e-commerce, and high-performance computing. For a provider, it is a nontrivial task to define a uniform pricing scheme for such a diverse set of applications. This cloud-computing paradigm has transformed a traditional distributed system into a two-party computation with pricing as the bridge.
A provider designs its infrastructure to maximize profit with respect to the pricing scheme, while a user designs her application according to the incurred cost. This is in contrast to a traditional distributed system, where the goal is to optimize for throughput, latency, or other system metrics as a single, whole system.

Pricing in cloud computing has two intertwined aspects. On the one hand, pricing has its root in system design and optimization: resource-consumption-based pricing is particularly sensitive to how a system is designed, configured, optimized, monitored, and measured (as shown in Section 4). On the other hand, pricing also has its root in economics, where key concepts such as fairness [17] and competitive pricing in a multi-provider marketplace affect actual prices. The pricing-induced interplay between systems and economics has fundamental implications for cloud computing, an important angle that researchers should explore. In this paper, we explore several dimensions of this interplay.

Cost as an explicit and measurable system metric. With pricing, the dollar cost of computation becomes an explicit and measurable metric for system optimization. This raises several questions: how do we optimize a system for this new metric? How is this metric related to traditional system metrics such as throughput? If both providers and users optimize for their own dollar cost and profit, does this lead to a globally optimal system that gets the work done at the lowest cost? Clearly, pricing shapes the answers to these questions.

Pricing fairness. Key concepts of pricing such as competition and fairness affect choices in the design of user applications and system infrastructures. As a start, we investigate pricing fairness in the cloud. Since users and providers have different and often conflicting incentives, pricing fairness balances user cost against provider profit.
Even within the scope of pricing based on resource utilization, there are choices about what to charge for (e.g., virtual-machine time vs. actual CPU time),
and different approaches achieve different levels of fairness.

Evolving system dynamics. The underlying systems for cloud computing evolve over time: individual machines may grow more powerful, gain more CPU cores [6] or accelerators [13], and/or adopt new storage media such as flash in place of disks [16]. Will a provider gain a competitive advantage by adopting these new technologies for selected applications? Should pricing evolve along with innovations in systems?

Cost of failures. Failures are bound to happen in cloud computing, due to both hardware faults and software issues [4]. Traditional system design cares mostly about tolerating and recovering from failures. In a pay-as-you-go cloud, we must also worry about the expenses incurred by failures and, more importantly, face the question of who is responsible for those expenses. Under the current pricing scheme used by Amazon, those expenses fall entirely on the shoulders of users, regardless of root cause.

It is not our intention to answer all these questions in this paper. Rather, we focus on making the case that these problems are worth exploring given the current state of affairs in cloud computing. We do so through preliminary studies on Amazon EC2, one of the major cloud providers, and on Spring, our homegrown testbed. Amazon EC2 provides a real cloud service, which we can study as a black box, while Spring offers us the opportunity to investigate the underlying distributed system. We find that (i) optimizing for cost does not necessarily lead to an optimal system, and optimization is hard for users due to their limited knowledge of the underlying mechanisms that a provider has in place; (ii) pricing unfairness is evident in the current pricing scheme used by Amazon; (iii) different system configurations have a significant impact on cost and profit; and (iv) failures do occur and incur non-negligible expenses for users.

Organization.
The rest of the paper is organized as follows. We introduce background on pricing in Section 2. Section 3 provides an overview of our experimental methodology, followed by experimental results in Section 4. We conclude and discuss future work in Section 5.

2 Background on Pricing

This section introduces background on pricing schemes and related studies on pay-as-you-go charging.

2.1 Pricing

Pricing plays a key role in the marketplace and has been well studied in economics [17]. Among the various factors that impact pricing, fairness [19] and competition [17] are the most relevant to our study. Pricing fairness consists of two aspects: personal and social fairness [19]. Personal fairness is subjective, meaning that a price meets a user's personal expectations. Social fairness is objective, meaning that the price is the same for all users, does not give a provider unreasonably high profits, and so on. For example, differentiated economy-class air ticket prices are socially unfair because some passengers have to pay more than others. Maxwell summarized the fairness of economic exchange in terms of what is priced, who gets price exceptions, what is included in the price, and so on. For example, a fair price should be charged to everyone, but adjusted for the needy. We refer readers to Maxwell [19] for more details. In this paper, we focus on the social fairness of a pricing scheme and examine whether the price is the same for all users.

Competition unleashes the power of markets in economics [17]. With competition, providers cannot set prices in the way most favorable to them. Instead, they gain a competitive advantage by adopting new technology and lowering their costs.

2.2 Pay-as-you-go Model

Pricing is not only important for economics, but also helps to shape how systems are used; for example, pricing can control congestion on Internet resources [18] and adjust computation-resource demand and supply in the grid [7].
In the pay-as-you-go model, the pricing scheme becomes an important bridge between users and providers. The current practice among major cloud providers is to price computing by virtual-machine hours; for example, Amazon charges $0.095 per virtual-machine hour. Moreover, pricing schemes are evolving and new ones are being introduced; for example, Amazon now offers a set of different schemes, including auction-based pricing. Additionally, several alternative pricing schemes have been proposed for better system behavior in the cloud. Singh et al. [23] suggested dynamic pricing for reserving computation resources. Molina-Jimenez et al. [20] developed a bilateral accounting model between users and providers to avoid malicious overcharging. This paper focuses on the scheme common among major cloud providers: charging based on incurred virtual-machine hours.

Evaluating and optimizing user expenses in a pay-as-you-go cloud has recently attracted research interest [4, 8]. Napper et al. [21] and Walker [26] compared Amazon EC2 with a private cloud for high-performance computing. The cost, availability, and performance of Amazon services have been studied with simple operations [22, 9]. In
contrast, this study focuses on both costs and profits with respect to pricing, and the resulting interplay between systems and economics.

3 Methodology Overview

We have assembled a set of workloads to approximate a typical workload in current cloud computing. With these workloads, we use two complementary approaches for evaluation. One is a black-box approach with Amazon EC2. As other cloud providers such as Google and Microsoft use similar pricing schemes, we expect our pricing-related findings to apply to them as well. Our second approach is to set up a cloud-computing testbed, called Spring, so that we can perform fully controlled experiments with full knowledge of how the underlying system works.

3.1 Workloads

We have identified several popular applications to simulate the different application domains listed in the case studies from cloud providers [2, 10, 27].

Postmark. We use Postmark [15] as an I/O-intensive benchmark, representing the file transactions of various web-based applications. The default setting in our experiments is as follows: the total file size is around 5 GB (1000 files, 5000 KB each); the number of transactions is

PARSEC. PARSEC [5] is a benchmark suite composed of real-world applications. We choose Dedup and BlackScholes to represent storage archival and high-performance computing in the cloud, respectively. Dedup compresses a file with de-duplication; BlackScholes calculates prices for European options. The default settings are as follows: Dedup has around 184 MB of input data for de-duplication; BlackScholes prices 10 million options.

Hadoop. We use Hadoop for large-scale data processing, choosing WordCount and StreamSort from the GridMix benchmark [11]. The default input data set is 16 GB for both applications.

3.2 Methodology on Amazon EC2

In the experiments with Amazon EC2, our execution is charged according to Amazon's pricing scheme.
We consider the amortized cost in a long-running scenario and calculate user expenses as Cost_user = Price × t, where t is the total running time of the task in hours and Price is the price per virtual-machine hour. We exclude the costs of storage and of data transfer between the client and the cloud, since they are negligible in our experiments (less than 1% of the total cost).

3.3 Methodology on the Spring System

Spring virtualizes the underlying physical data center and provides virtual machines to users. Spring consists of two major modules: a VMM (Virtual Machine Monitor) and an auditor. The VMM is responsible for VM (virtual machine) allocation, consolidation, and migration among physical machines. The auditor calculates user expenses and estimates the provider's profit; the estimation helps us understand the impact of pricing in the cloud. We estimate the provider's profit by subtracting the total provider cost from the expected payment from users. While the total user payment is directly measured from incurred virtual-machine hours, it is challenging to estimate the provider's profit accurately for a real data center. As a starting point, we adopt Hamilton's estimation of the total cost of a large-scale data center [12]. Hamilton calculates the amortized cost of running a data center as the sum of server costs, power consumption, power and cooling infrastructure, and other costs. He further defines the fully burdened power consumption as the total power consumption of IT equipment plus the power and cooling infrastructure [12]. Following Hamilton's estimates, we calculate the total cost of the fully burdened power consumption as Cost_full = p × P_raw × PUE, where p is the electricity price (dollars per kWh), P_raw is the total energy consumption of IT equipment, including all servers and routers (kWh), and PUE is the PUE value of the data center [25].
PUE [25] is a metric that measures the efficiency of a data center's power distribution and mechanical equipment. The total provider cost is estimated as Cost_provider = (Cost_full + Cost_amortized) × Scale, where Cost_amortized is the total amortized server cost, and Scale is the ratio of the estimated total cost to the sum of the fully burdened power cost and Cost_amortized in Hamilton's estimation. To estimate Cost_amortized, we estimate the amortized cost per server as C_amortizedunit × t_server, where C_amortizedunit is the amortized cost per hour per server and t_server is the elapsed time on the server (hours). We further estimate P_raw as the total power consumption of servers and routers. For a server, we use a simple linear regression model [14] to estimate energy consumption from resource utilization, i.e., P_server = P_idle + u_cpu × c_0 + u_io × c_1, where u_cpu (%) is the CPU utilization, u_io (MB/s) is the I/O bandwidth, and c_0 and c_1 are the coefficients of the model. We adopt the power consumption model of the network router from a previous study [3].
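The cost estimators above can be sketched in a few lines of Python. This is an illustrative reading of the formulas, not the paper's implementation; the numbers in the usage lines are placeholders (only the $0.095/h price comes from the paper, as the small-instance rate quoted in Section 1).

```python
def user_cost(price_per_hour: float, hours: float) -> float:
    """Cost_user = Price * t; storage and transfer charges are
    excluded, as in Section 3.2 (< 1% of the total cost there)."""
    return price_per_hour * hours

def server_power_kw(p_idle_w: float, u_cpu: float, u_io: float,
                    c0: float, c1: float) -> float:
    """Linear power model P_server = P_idle + u_cpu*c0 + u_io*c1 (watts),
    converted to kW. u_cpu is CPU utilization in percent, u_io is I/O
    bandwidth in MB/s; c0 and c1 are coefficients fitted per machine."""
    return (p_idle_w + u_cpu * c0 + u_io * c1) / 1000.0

def provider_cost(p_elec: float, p_raw_kwh: float, pue: float,
                  cost_amortized: float, scale: float) -> float:
    """Cost_provider = (Cost_full + Cost_amortized) * Scale,
    where Cost_full = p * P_raw * PUE."""
    return (p_elec * p_raw_kwh * pue + cost_amortized) * scale

# A 30-minute task on a $0.095/h instance costs the user:
print(user_cost(0.095, 0.5))  # 0.0475
# A server at 50% CPU and 10 MB/s I/O, with placeholder coefficients:
print(round(server_power_kw(250, 50, 10, 0.46, 0.14), 4))  # 0.2744
```

The same three functions are enough to reproduce every dollar figure used later in the evaluation, which is why we separate the user-side and provider-side estimates.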
Table 1: The configurations and prices of the Small and Medium VM instance types on Amazon (Linux, California, USA, Jan 2010); columns are instance type, CPU (#virtual cores), RAM (GB), storage (GB), and price ($/h).

4 Preliminary Evaluations

To identify the various dimensions of the pricing-induced interplay between systems and economics, we have performed a series of experiments on both Amazon EC2 and Spring.

4.1 Experimental Setup

The same set of experiments is run on both Amazon EC2 and Spring.

Setup on Amazon EC2. We use the two default on-demand virtual-machine types provided by EC2, namely small and medium instances. These virtual machines run Fedora Linux and are located in California, USA. Table 1 shows their configurations and prices.

Setup on Spring. We use VirtualBox [24] to implement virtual machines in Spring. VirtualBox allows us to specify the allocation of computation resources so that we can approximate the configurations and prices of the different instance types on Amazon. The host OS is Windows Server 2003; the guest OS is Fedora 10. The physical machine configurations are shown in Table 2. We use an eight-core machine to evaluate the single-machine benchmarks, and a cluster of 32 four-core machines to evaluate Hadoop. We assign a CPU core exclusively to a VM to approximate a virtual core in Amazon EC2; for example, we consolidate at most four medium instances on the eight-core machine. We also list the parameter values of our power consumption model for both types of machines. We use a power meter [1] to measure the actual power consumption of a server and construct the power consumption model from these measurements. We have validated the model: its estimates are close to the values measured by a real power meter.
For estimating the total dollar cost, we follow Hamilton's parameter settings for a data center of 50 thousand servers [12]: PUE = 1.7, Scale = 2.24, energy price p = $0.07 per kWh, and amortized server unit cost C_amortizedunit = $0.08 per hour. As an example of evaluating the effect of hardware changes, we evaluate the case of flash in place of disks. In that evaluation, we use an Intel 80 GB X25-M SSD (Solid State Drive) in place of a SATA hard drive. The current price of the SSD is around $350, and the price of a 500 GB SATA hard drive is around $50; we accordingly adjust the amortized cost of the machine with an SSD to $0.09 per hour. Compared with hard disks, SSDs also offer a power-efficiency advantage, and we adjust the power consumption accordingly.

Table 2: Hardware configuration of machines in Spring. The eight-core machine has Intel Xeon 2.00 GHz CPUs, 32 GB RAM, a RAID-5 array of SCSI disks, 1 Gigabit networking, and power-model parameter P_idle = 299 W. The four-core machine has an Intel Xeon X3360 Quad 2.83 GHz CPU, 8 GB RAM, a RAID-0 array of SATA disks, 1 Gigabit networking, and power-model parameters P_idle = 250 W, c_0 = 0.46, c_1 = 0.14.

Table 3: Elapsed time (sec) and cost ($) of the single-machine benchmarks (Postmark, Dedup, BlackScholes) on small and medium instances on EC2.

We study the system throughput of Spring in numbers of tasks finished per hour, as well as user costs and provider profits. Additionally, we measure the efficiency of a provider's investment as ROI (Return on Investment), i.e., ROI = Profit / Cost_provider × 100%.
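The ROI metric is a one-liner in code. The figures in the example below are hypothetical, chosen only to show how a provider can operate at a loss (negative ROI) on some workloads, as the consolidation experiments later suggest:

```python
def roi_percent(total_user_payment: float, cost_provider: float) -> float:
    """ROI = Profit / Cost_provider * 100%,
    where Profit = total user payment - total provider cost."""
    return (total_user_payment - cost_provider) / cost_provider * 100.0

# Hypothetical: users pay $1.20 for work that costs the provider $2.00.
print(round(roi_percent(1.20, 2.00), 1))  # -40.0
```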
Figure 1: (a) Time and (b) costs for Hadoop vs. the number of same-type instances on EC2.
Table 4: Effects of virtual-machine consolidation in Spring (every four Postmark tasks, small VM type); for one, two, and four VMs per physical machine, rows are average elapsed time (sec), average cost per task ($), total cost of users ($), P_raw (kWh), Cost_provider ($), profit ($), ROI (-40.0%, 17.2%, and 142.0%, respectively), and throughput (tasks/h).

4.2.1 User Optimizations on EC2

User optimizations on EC2 include application-level optimizations for a fixed instance type, choosing a suitable instance type, and tuning the number of instances. For a fixed instance type, we tune the number of threads to optimize the elapsed time and cost of Postmark and PARSEC. Such tuning improves both performance and user cost: under the current consumption-based pricing, there is no gap between optimizing for performance and optimizing for cost.

Choosing a suitable instance type is important for both performance and cost. Table 3 shows the elapsed times and costs of the optimized single-machine benchmarks. Postmark has a slightly larger elapsed time on a small instance, but costs almost 50% less there than on a medium instance. This indicates a conflict between choosing an instance type for performance and choosing one for cost. In contrast, Dedup and BlackScholes have both a smaller execution time and a smaller cost on a medium instance than on a small one. Thus, choosing the instance type that minimizes cost may not yield the best performance, and vice versa.

Figure 1 shows the performance and cost of Hadoop as we vary the number of instances from four to sixteen. There is no clear pattern in cost as we vary the number of instances and the instance types. Again, the setting that achieves the minimum cost differs from the one with the best performance.

4.2.2 Provider Optimizations on Spring

We mainly focus on the VM consolidation optimization in Spring, which tunes the number of concurrent VMs running on the same physical machine.
We first study the effect of virtual-machine consolidation when running the same set of tasks on Spring. We vary the number of VMs on the eight-core machine from one to four. Tables 4 and 5 show the results for running Postmark and BlackScholes tasks continuously, respectively; we report the numbers once execution is steady. We choose Postmark and BlackScholes to run on small and medium instances, respectively, as they have a smaller cost on those VM types, matching the results observed on EC2. We do not present the results for Dedup, since they are similar to those of BlackScholes.

Table 5: Effects of virtual-machine consolidation in Spring (every four BlackScholes tasks, medium VM type); for one, two, and four VMs per physical machine, rows are elapsed time (sec), average cost per task ($), total cost of users ($), P_raw (kWh), Cost_provider ($), profit ($), ROI (22.8%, 140.8%, and 365.2%, respectively), and throughput (tasks/h).

We make the following three observations. First, consolidation greatly reduces power consumption: in this experiment, it reduces P_raw by around 150% and 21% for BlackScholes and Postmark, respectively. Second, since consolidation increases the total cost for users and decreases the cost of power consumption, the provider's profit increases significantly, and so does its ROI. In our estimation, without consolidation a provider actually loses money on Postmark; with consolidation, the provider enjoys a significant improvement in ROI, with increases of 180% and 340% for Postmark and BlackScholes, respectively. To make a profit, a provider should choose a suitable consolidation strategy. Finally, as more tasks are consolidated onto the same physical machine, throughput reaches a peak (at two VMs for Postmark) and then degrades. Although a provider can make a higher profit by consolidating more VMs, system throughput can degrade by over 64% compared with the peak.
This points to a potential flaw in pricing: as a provider adopts the strategy that maximizes its profit, that strategy can lead to sub-optimal system throughput. We further study the multi-machine benchmarks on Hadoop. Table 6 shows the results of two consolidation settings for running Hadoop on eight VMs. Consolidation again significantly increases the provider's profit, with ROI increases of 135% and 118% for WordCount and StreamSort, respectively. However, these profit increases come at the cost of degraded system throughput, with reductions of 12% and 350% for WordCount and StreamSort, respectively. This confirms the problem we have seen in the single-machine benchmarks.
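The tension just described (per-hour revenue rising with consolidation while throughput peaks and then degrades) can be illustrated with a toy model. Everything here is assumed: the quadratic contention factor and the dollar figures are invented to reproduce the qualitative shape, not fitted to Spring's measurements.

```python
def consolidate(k: int, base_time_h: float, price: float,
                machine_cost_per_h: float):
    """k identical VMs share one physical machine; contention stretches
    each task's running time (assumed quadratic slowdown in k)."""
    task_time_h = base_time_h * (1 + 0.2 * (k - 1) ** 2)
    throughput = k / task_time_h                    # tasks finished per hour
    profit_per_h = price * k - machine_cost_per_h   # k VM-hours billed
    return throughput, profit_per_h

# Profit grows monotonically with k, but throughput peaks at k = 2:
for k in (1, 2, 4):
    tput, profit = consolidate(k, base_time_h=1.0, price=0.095,
                               machine_cost_per_h=0.15)
    print(k, round(tput, 2), round(profit, 3))
```

Under these assumed parameters the single-VM configuration even runs at a loss, mirroring the negative ROI observed for unconsolidated Postmark.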
Table 6: Effects of virtual-machine consolidation in Spring (Hadoop, eight VMs); for WordCount and StreamSort with one and two VMs per physical machine, rows are total cost of users ($), profit ($), ROI (25.0% and 150.1% for WordCount; 22.8% and 140.8% for StreamSort), and throughput (tasks/h).

Table 7: Categorized threads in the official forums of Amazon EC2, Google App Engine, and Windows Azure. Topics: clarification on charging (e.g., the definition of instance hours); complaints about charging too much in certain scenarios (e.g., charging for an idle instance); cost comparisons among vendors (Amazon, Google, Microsoft, etc.); different prices (reserved mode, daily charge, debugging fee, etc.); and others (security, economics, software licensing, etc.).

4.3 Pricing Fairness

Personal fairness. To understand users' concerns about pricing fairness, we surveyed the official forums of recent cloud providers. We obtained the threads related to pricing by searching for the keywords "price" or "pricing". We found 173 threads about pricing (up to Jan 10, 2010), each consisting of multiple replies. We manually categorized them by topic; the categorization is shown in Table 7. Users mostly need clarification on how they are charged. The second most popular topic is complaints about charging too much in certain scenarios, which indicates personal unfairness in the pricing scheme.

Social fairness. We investigate the social fairness of the pricing scheme by examining the variation in cost for the same task. We use two metrics to measure cost variation: the coefficient of variation (cv = stdev / mean × 100%) and the maximum difference (maxdiff = (Hi - Lo) / Lo × 100%, where Hi and Lo are the highest and lowest values among different runs, respectively). We first look at the variation across different runs on the same instances in Amazon EC2. We run each single-machine benchmark ten times. Table 8 shows the variation in execution time of the single-machine benchmarks.
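The two variation metrics defined above are straightforward to compute. The sample run costs below are made up for illustration; they are not the measured EC2 numbers:

```python
from statistics import mean, stdev

def cv_percent(values):
    """Coefficient of variation: cv = stdev / mean * 100%."""
    return stdev(values) / mean(values) * 100.0

def maxdiff_percent(values):
    """Maximum difference: maxdiff = (Hi - Lo) / Lo * 100%."""
    hi, lo = max(values), min(values)
    return (hi - lo) / lo * 100.0

runs = [1.00, 1.05, 0.98, 1.40, 1.02]  # hypothetical per-run costs ($)
print(round(maxdiff_percent(runs), 1))  # 42.9
```

Note that maxdiff is normalized by the lowest value, so a single slow outlier run inflates it much more than it inflates cv.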
The variations are significant. For example, the worst case of Postmark incurs a cost that is 40% higher than its best case.

Table 8: Variation of different runs on EC2
          Postmark   Dedup   BlackScholes
cv          9.1%     11.0%       3.9%
maxdiff    40.1%     38.8%      12.6%

We also investigate the variation among different concurrent instances, which represents the scenario of different users using the cloud simultaneously. Figure 2 shows the cost of running Postmark ten times on three different small instances. There are noticeable differences among instances: the average cost on Instance 3 is over 20% higher than those on Instances 1 and 2.

Figure 2: Variation among three instances (Postmark) on EC2.

We have observed variations in Spring similar to those in Amazon EC2 (as shown in Tables 4, 5, and 6 of Section 4.2.2). As more VMs are consolidated onto the same physical machine, users pay more for the same task; for example, Postmark with four VMs costs around three times as much as with one VM. The cost variations on both Amazon and Spring indicate social unfairness in the current pricing scheme.

4.4 Different Hardware Configurations

Figure 3 shows the costs and profits of running a Postmark task on a small instance with a hard disk and with an SSD. The elapsed times of Postmark are around 180 and 400 seconds on the SSD and on the hard disk, respectively. Using the SSD greatly reduces the user's cost (the hard-disk run costs around 120% more), while slightly decreasing the provider's ROI from -40% to -44%. By offering cost-efficient services to users, the provider gains leverage to adapt its pricing scheme to its advantage. Furthermore, a provider likely needs to fine-tune its pricing structure so that it balances the benefit to users against its own profit.

Figure 3: Costs and profits of running Postmark on a hard disk and an SSD without consolidation in Spring.
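A back-of-the-envelope check of the cost gap, using the elapsed times above and the $0.095/h small-instance price from Section 1, and assuming cost scales linearly with elapsed time as under Cost_user = Price × t:

```python
def cost_per_task(price_per_hour: float, elapsed_sec: float) -> float:
    """Cost of one task: Cost_user = Price * t, with t in hours."""
    return price_per_hour * elapsed_sec / 3600.0

hdd = cost_per_task(0.095, 400)  # ~400 s on the hard disk
ssd = cost_per_task(0.095, 180)  # ~180 s on the SSD
print(round((hdd - ssd) / ssd * 100))  # 122 (% more expensive on the disk)
```

The hourly price cancels in the ratio, so the roughly 120% gap follows from the elapsed times alone.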
4.5 Failures

During our study, we encountered a few bugs in the benchmarks on Amazon EC2, even after testing the implementations on Spring. Here is one example: we successfully executed Hadoop in Spring, but still hit a Hadoop exception with the message "Address already in use" on Amazon EC2. Since this kind of bug originates from the software stack of Hadoop interacting with Amazon EC2, users can hardly prevent such bugs, yet still pay for the failed runs. Bugs are not the only cause of failures; transient failures in the cloud infrastructure also occur. For example, we observed a task failure when running StreamSort on Hadoop with eight VMs in Spring, causing a sharp eight-fold increase in the total elapsed time. This shows that transient failures in the underlying infrastructure can be a significant factor in provisioning for user costs.

5 Concluding Remarks

By embracing a pricing scheme that connects providers with users, cloud computing has managed to bridge distributed systems and economics. In this paper, we focus on understanding the implications of this new paradigm and its impact on distributed-systems research. Our preliminary study has revealed interesting issues arising from the tensions between users and providers, and between distributed systems and economics. We hope that this will help launch an inter-disciplinary endeavor that eventually contributes to the emergence of cloud computing.

Acknowledgments

We are grateful to the anonymous reviewers, James Larus, and the System Research Group of Microsoft Research Asia for their insightful comments.

References

[1]
[2] Amazon Case Studies.
[3] G. Ananthanarayanan and R. H. Katz. Greening the switch. In HotPower.
[4] M. Armbrust, A. Fox, et al. Above the clouds: A Berkeley view of cloud computing. Technical Report EECS, UC Berkeley.
[5] C. Bienia, S. Kumar, J. P. Singh, and K. Li. The PARSEC benchmark suite: Characterization and architectural implications. In PACT.
[6] S. Boyd-Wickizer, H. Chen, R. Chen, Y. Mao, F. Kaashoek, R. Morris, A. Pesterev, L. Stein, M. Wu, Y. Dai, Y. Zhang, and Z. Zhang. Corey: An operating system for many cores. In OSDI.
[7] R. Buyya, D. Abramson, J. Giddy, and H. Stockinger. Economic models for resource management and scheduling in grid computing. Concurrency and Computation: Practice and Experience, 14(13-15).
[8] E. Deelman, G. Singh, M. Livny, B. Berriman, and J. Good. The cost of doing science on the cloud: the Montage example. In SC.
[9] S. L. Garfinkel. An evaluation of Amazon's grid computing services: EC2, S3 and SQS. Technical Report TR-08-07, Harvard University.
[10] Google App Engine Developer Profiles.
[11] GridMix. 37b4de9efe070e777023ec171a498/src/benchmarks/gridmix.
[12] J. Hamilton. /11/28/CostOfPowerInLargeScaleDataCenters.aspx.
[13] B. He, W. Fang, Q. Luo, N. K. Govindaraju, and T. Wang. Mars: a MapReduce framework on graphics processors. In PACT.
[14] A. Kansal and F. Zhao. Fine-grained energy profiling for power-aware application design. In HotMetrics.
[15] J. Katcher. Postmark: a new file system benchmark. Technical Report TR-3022, Network Appliance.
[16] Y. Li, B. He, Q. Luo, and Y. Ke. Tree indexing on solid state drives. In Proceedings of the VLDB Endowment.
[17] N. Mankiw. Principles of Economics. South-Western Pub.
[18] R. Mason. Simple competitive Internet pricing.
[19] S. Maxwell. The Price is Wrong: Understanding What Makes a Price Seem Fair and the True Cost of Unfair Pricing. Wiley.
[20] C. Molina-Jimenez, N. Cook, and S. Shrivastava. On the feasibility of bilaterally agreed accounting of resource consumption. In Service-Oriented Computing.
[21] J. Napper and P. Bientinesi. Can cloud computing reach the Top500? In UnConventional High Performance Computing Workshop.
[22] M. R. Palankar, A. Iamnitchi, M. Ripeanu, and S. Garfinkel. Amazon S3 for science grids: a viable solution? In DADC.
[23] G. Singh, C. Kesselman, and E. Deelman. Adaptive pricing for resource reservations in shared environments. In Grid.
[24] Sun VirtualBox.
[25] The Green Grid. The Green Grid data center power efficiency metrics: PUE and DCiE. Technical report.
[26] E. Walker. Benchmarking Amazon EC2 for high-performance scientific computing. The USENIX Magazine, 33(5).
[27] Windows Azure Case Studies.