I/O Characterization of Big Data Workloads in Data Centers
Fengfeng Pan 1,2, Yinliang Yue 1, Jin Xiong 1, Daxiang Hao 1
1 Research Center of Advanced Computer Systems, Institute of Computing Technology, Chinese Academy of Sciences, China
2 University of Chinese Academy of Sciences, Beijing, China
[email protected] {yueyinliang, xiongjin, haodaxiang}@ict.ac.cn

Abstract

As the amount of data explodes rapidly, more and more organizations use data centers to make effective decisions and gain a competitive edge. Big data applications have gradually come to dominate data center workloads, so it is increasingly important to understand their behaviour in order to further improve the performance of data centers. Because of the ever-widening gap between I/O devices and CPUs, I/O performance dominates the overall system performance, which makes characterizing the I/O behaviour of big data workloads both important and imperative. In this paper, we select four typical big data workloads from broad application areas out of BigDataBench, a big data benchmark suite built from internet services: Aggregation, TeraSort, K-means and PageRank. We conduct a detailed analysis of their I/O characteristics, including disk read/write bandwidth, I/O device utilization, average waiting time of I/O requests, and average size of I/O requests, which can serve as a guide to designing high-performance, low-power and cost-aware big data storage systems.

Keywords: I/O characterization; Big Data Workloads; Data Centers

1. Introduction

In recent years, big data workloads [] have become more and more popular and play an important role in enterprise business. Popular and typical applications include TeraSort, SQL operations, PageRank and K-means.
Specifically, TeraSort is widely used for page or document ranking; SQL operations, such as join, aggregation and select, are used for log analysis and information extraction; PageRank is widely used in the search engine field; and K-means is commonly used as an electronic commerce algorithm. These big data workloads run in data centers, and their performance is critical. The factors that affect their performance include: algorithms; hardware, including nodes, interconnection and storage; and software, such as the programming model and file systems. This is why several efforts have been made to analyse the impact of these factors on the systems [][19][22]. However, access to persistent storage usually accounts for a large part of application time because of the ever-increasing performance gap between CPUs and I/O devices. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. BPOE-4, March 1, 2014, Salt Lake City, Utah, USA. With the rapid growth of data volume in many enterprises and a strong desire to process and store data efficiently, a new generation of big data storage systems is urgently required. To achieve this goal, a deep understanding of the big data workloads present in data centers is necessary to guide the design and tuning of big data systems.
Some studies have explored the computing characteristics of big data workloads [21][17], and much work has also been done to depict the storage I/O characteristics of enterprise storage [5][8]. However, to the best of our knowledge, none of the existing research has characterized the I/O behaviour of big data workloads, which is much more important in the current big data era. Understanding the characteristics of these applications is therefore the key to better designing storage systems and optimizing their performance and energy efficiency. The main metrics used for the I/O characterization in this paper are listed below. First, disk read or write bandwidth (MB/s): the amount of data read from or written to the storage system per unit time. Second, I/O device utilization (percent): the percentage of CPU time during which I/O requests were issued to the device. Third, average waiting time of I/O requests (seconds): the time spent in the queue. Fourth, average size of I/O requests (in sectors of 512 bytes): the average size of the requests issued to the device. In this paper, we choose the four typical big data workloads mentioned above because they are widely used in popular application domains [][1], such as search engines, social networks and electronic commerce. Detailed information about the I/O metrics and workloads is given in Section 3.2.
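All four metrics can be derived from the per-device counters the Linux kernel exposes, which are the same raw numbers iostat consumes. As a rough illustration, the sketch below computes them from two counter snapshots; the tuple layout and the sample values are illustrative assumptions, not measurements from this study.

```python
# Sketch: deriving the paper's four I/O metrics from two snapshots of the
# per-disk kernel counters (the raw numbers behind iostat's report).
# The tuple layout and the sample counter values below are illustrative
# assumptions, not measurements from the paper.

SECTOR_BYTES = 512  # sector size used throughout the paper

def io_metrics(before, after, interval_s):
    """Each snapshot: (reads_completed, sectors_read, ms_reading,
    writes_completed, sectors_written, ms_writing, ms_doing_io)."""
    reads, rsec, rms, writes, wsec, wms, io_ms = (
        a - b for a, b in zip(after, before))
    ios = reads + writes
    return {
        "read_mb_s": rsec * SECTOR_BYTES / interval_s / 1e6,   # read bandwidth
        "write_mb_s": wsec * SECTOR_BYTES / interval_s / 1e6,  # write bandwidth
        "util_pct": 100.0 * io_ms / (interval_s * 1000.0),     # device utilization
        "await_ms": (rms + wms) / ios if ios else 0.0,         # avg time per request
        "avgrq_sz_sectors": (rsec + wsec) / ios if ios else 0.0,
    }

# One-second interval with made-up counter values:
before = (1000, 80000, 900, 500, 40000, 700, 0)
after  = (1200, 120000, 1500, 600, 60000, 1100, 800)
m = io_metrics(before, after, 1.0)
```

On these sample deltas the sketch reports roughly 20 MB/s of reads, 80% utilization, and an average request size of 200 sectors (100 KB), illustrating how the four metrics relate to one another.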
Through this detailed analysis, we make the following four observations: changing the number of task slots has no effect on the four I/O metrics, but increasing the number of task slots appropriately can accelerate application execution; increasing memory can reduce the number of I/O requests, alleviate the pressure of disk reads/writes, and effectively improve I/O performance when the data size is large; compressing the intermediate data mainly affects MapReduce I/O performance and has little influence on HDFS I/O performance, although compression consumes some CPU resources, which may influence the job's execution time; and the I/O patterns of HDFS and MapReduce are different, namely, HDFS's I/O pattern is large sequential access while MapReduce's is small random access, so when configuring storage systems we should take several factors into account, such as the number and types of devices. The rest of the paper is organized as follows: Section 2 discusses related work. Section 3 describes the experimental methodology.
Section 4 shows the experimental results. Section 5 briefly brings up future work and concludes this paper.

2. Related Work

Workload characterization studies play a significant role in detecting problems and performance bottlenecks of systems. Many workloads have been extensively studied in the past, including enterprise storage systems [10][7][24], web servers [16][23], HPC clusters [15] and network systems [13].

2.1 I/O Characteristics of Storage Workloads

There have been a number of papers on the I/O characteristics of storage workloads [16][12][18]. Kozyrakis et al. [18] presented a system-wide characterization of large-scale online services provided by Microsoft and showed the implications of such workloads for data-center server design. Kavalanekar et al. [16] characterized large online services for storage system configuration and performance modeling, covering a set of characteristics including block-level statistics, multi-parameter distributions and rankings of file access frequencies. Similarly, Delimitrou et al. [12] presented a concise statistical model which accurately captures the I/O access patterns of large-scale applications, including their spatial locality, inter-arrival times and types of accesses.

2.2 Characteristics of Big Data Workloads

Big data workloads have been studied in recent years at various levels, such as job characterization [21][17][9] and storage systems [5][8]. Kavulya et al. [17] characterized resource utilization patterns, job patterns, and sources of failure. This work focused on predicting job completion times and finding performance problems. Similarly, Ren et al. [21] focused not only on job and task characterization but also on resource utilization on a Hadoop cluster, including CPU, disk and network. However, the research above focused on the job level, not on the storage level.
Some studies have provided metrics about data access patterns in MapReduce scenarios [6][14][4], but these metrics are limited, such as block age at time of access [14] and file popularity [6][4]. Abad et al. [5] conducted a detailed analysis of several HDFS characteristics, such as file popularity, temporal locality and request arrival patterns, and then figured out the data access patterns of two types of big data workloads, namely batch and interactive query workloads. But that work concentrates on HDFS's I/O characterization, does not study the intermediate data, and involves only two types of workloads. In this paper, we focus on the I/O characteristics of big data workloads. The differences between our work and previous work are: first, we focus on the I/O behaviour of individual workloads and choose four typical workloads from broad areas, including search engines, social networks and e-commerce, which are popular in the current big data era; second, we analyze the I/O behaviour of both HDFS and MapReduce intermediate data using several I/O characteristics, including disk read/write bandwidth, I/O device utilization, average waiting time of I/O requests, and average size of I/O requests.

3. Experimental Methodology

This section first describes our experimental platform, and then presents the workloads used in this paper.

3.1 Platform

We use an 11-node (one master and ten slaves) Hadoop cluster to run the four typical big data workloads. The nodes in our Hadoop cluster are connected through a 1 Gb Ethernet network. Each node has two Intel Xeon E5645 (Westmere) processors, 32 GB of memory and 7 disks (1 TB each). A Xeon E5645 processor includes six physical out-of-order cores with speculative pipelines. Tables 1 and 2 show the detailed configuration information.
Table 1: The detailed hardware configuration information
CPU Type: Intel Xeon E5645
# Cores: 6 cores @ 2.40 GHz
# Threads: 12 threads
Memory: 32 GB, DDR3
Disks: 7 disks (one disk for the system, three disks for HDFS data, the other three disks for MapReduce intermediate data)
Disk Model: Seagate ST1NM11, Capacity: 1 TB, Rotational Speed: 7200 RPM, Avg. Seek/Rotational Time: 8.5/4.2 ms, Sustained Transfer Rate: 150 MB/s

Table 2: The detailed software configuration information
Hadoop: 1.0.4
JDK: 1.6.0
Hive: 0.11.0
OS Distribution and Linux kernel: CentOS 5.5
TeraSort: BigDataBench 2.1 [2]
SQL operations: BigDataBench 2.1
PageRank: BigDataBench 2.1
K-means: BigDataBench 2.1

3.2 Workloads and Statistics

We choose four popular and typical big data workloads from BigDataBench [25]. BigDataBench is a big data benchmark suite built from internet services, and it provides several data generation tools to generate various types and volumes of big data from small-scale real-world data while preserving their characteristics. Table 3 describes the workloads characterized in this paper.

Micro Benchmarks: TeraSort. TeraSort is a standard benchmark created by Jim Gray. In this paper, we use the data generation tools of BigDataBench 2.1 to generate text data as its input. According to a previous study [11], TeraSort is an I/O-bound workload.

Data Analytics Benchmarks: Hive Query. The Hive Query workload is developed based on the study in [20]. It consists of Hive queries (e.g. Aggregation and Join) performing typical OLAP queries. Its input data is generated by BigDataBench 2.1's data generation tools. In this paper, we select the Aggregation operation, which is a CPU-bound workload [11].

Machine Learning Benchmarks: K-means. K-means is a well-known clustering algorithm for knowledge discovery and data mining. It has two stages, namely iteration (CPU-bound) and clustering (I/O-bound) [11]. We use the
graph data provided by BigDataBench 2.1 as K-means' input data.

4. Results

In this section, we describe the HDFS/MapReduce I/O characteristics using four metrics, namely disk read/write bandwidth, I/O device utilization, average waiting time of I/O requests, and average size of I/O requests. By comparing the workloads, we obtain the I/O characterization of HDFS and MapReduce respectively, as well as the differences between HDFS and MapReduce. In addition, different Hadoop configurations can influence the workloads' execution, so we select three factors and analyse their impact on the I/O characterization of these workloads. The three factors are as follows. First, the number of task slots, including map slots and reduce slots. In Hadoop, computing resources are represented by slots; there are two types of slot, map task slots and reduce task slots, and here the computing resource refers to CPU. Second, the amount of physical memory in each node. Memory plays an important role in I/O performance, so it is an important factor affecting the I/O behaviour of workloads; the second factor we focus on is therefore the relationship between memory and the I/O characterization. Third, whether the intermediate data is compressed or not. Compression involves two types of resources, CPU and I/O, and the question is what influence it has on the I/O behaviour of workloads when both CPU and I/O resources change. So the final factor we focus on is the relationship between compression and the I/O characterization.

Web Search Benchmarks: PageRank. PageRank is a link analysis algorithm that assigns a numerical weighting to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of "measuring" its relative importance within the set. BigDataBench also has an implementation of PageRank on Hadoop, which uses the Google web graph data to generate PageRank's input data.
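The three factors above map directly onto Hadoop 1.x configuration properties. As a hedged sketch (property names are standard in Hadoop 1.x; the values shown read the paper's "1_8" setting as 1 map slot and 8 reduce slots per node, which is an interpretation, not a confirmed detail), a slave node's mapred-site.xml might contain:

```xml
<!-- Sketch of a Hadoop 1.x mapred-site.xml fragment; values illustrative -->
<configuration>
  <!-- Factor 1: task slots per TaskTracker (the "1_8" configuration) -->
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>1</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>8</value>
  </property>
  <!-- Factor 3: compress the intermediate (map output) data -->
  <property>
    <name>mapred.compress.map.output</name>
    <value>true</value>
  </property>
</configuration>
```

Factor 2, the physical memory per node, is a hardware setting rather than a Hadoop property, which is why the experiments vary it at the machine level (16 GB vs 32 GB).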
PageRank on Hadoop is a CPU-bound workload [11].

Table 3: The description of the workloads
Workloads / Performance bottleneck / Scenarios / Input data size
TeraSort (TS) / I/O-bound / Page ranking; document ranking / 1 TB
Aggregation (AGG) / CPU-bound / Log analysis; information extraction / 512 GB
K-means (KM) / CPU-bound in iteration; I/O-bound in clustering / Clustering and classification / GB
PageRank (PR) / CPU-bound / Search engine /

Table 4: Notation of I/O characterization
rMB/s and wMB/s: The number of megabytes read from or written to the device per second
%util: Percentage of CPU time during which I/O requests were issued to the device
await (ms): The average time for I/O requests issued to the device to be served, including the time spent by the requests in the queue and the time spent servicing them
svctm (ms): The average service time for I/O requests that were issued to the device
avgrq-sz (in sectors): The average size of the requests that were issued to the device; the sector size is 512 B
Note: average waiting time of an I/O request = await - svctm

4.1 Disk Read/Write Bandwidth

Task slots.

Figure 1: The effects of the number of task slots on the disk read/write bandwidth in HDFS and MapReduce respectively

Figure 1 shows the effects of the number of task slots on the disk read/write bandwidth in HDFS and MapReduce respectively. In these experiments, each node is configured with 16 GB of memory and the intermediate data is compressed. "1_8" and "2_16" in the figures denote the number of map task slots and reduce task slots respectively.
From Figure 1 we can draw two conclusions. First, when the number of task slots changes, there is barely any influence on the disk read/write bandwidth in either scenario for any workload. Second, the variation of the disk read/write bandwidth differs across workloads in both scenarios because the data volume of each workload in different phases of execution is not the same.

Iostat [3] is a widely used monitoring tool that collects and reports various system statistics, such as CPU time, memory usage and disk I/O statistics. In this paper, we mainly focus on the disk-level I/O behaviour of the workloads; we extract information from iostat's reports, and the metrics we focus on are shown in Table 4.
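As a concrete illustration of how the Table 4 metrics come out of an iostat report, the sketch below parses one extended device line. The column order matches older sysstat releases (rrqm/s, wrqm/s, r/s, w/s, rsec/s, wsec/s, avgrq-sz, avgqu-sz, await, svctm, %util), which is an assumption about the exact iostat version used; the sample line itself is made up.

```python
# Sketch: extracting the Table 4 metrics from one device line of `iostat -x`.
# Assumed column order (older sysstat releases): rrqm/s wrqm/s r/s w/s
# rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util. Sample line is made up.

COLUMNS = ["device", "rrqm_s", "wrqm_s", "r_s", "w_s", "rsec_s", "wsec_s",
           "avgrq_sz", "avgqu_sz", "await", "svctm", "util"]

def parse_iostat_line(line):
    fields = line.split()
    row = dict(zip(COLUMNS, [fields[0]] + [float(f) for f in fields[1:]]))
    # The paper's "average waiting time of I/O requests" is queue time only:
    row["queue_wait_ms"] = row["await"] - row["svctm"]
    # avgrq-sz is reported in 512-byte sectors; convert to kilobytes:
    row["avgrq_kb"] = row["avgrq_sz"] * 512 / 1024
    return row

sample = "sdb 0.5 12.3 80.0 40.0 40960.0 20480.0 512.0 4.0 8.5 2.5 75.0"
row = parse_iostat_line(sample)
```

On this sample line the request spends 6.0 ms in the queue (await minus svctm) and the average request is 512 sectors, i.e. 256 KB, which is the kind of per-device summary the following sections analyse.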
This observation also applies to the peak disk read/write bandwidth of these workloads in HDFS and MapReduce. Table 5 shows the peak HDFS disk read bandwidth of these workloads. In a word, there is little effect on the disk read/write bandwidth when the number of task slots changes. However, configuring the number of task slots appropriately can reduce the execution time of workloads, so we should take it into account when running workloads.

Table 5: The peak HDFS disk read bandwidth of these workloads

Memory. Figure 2 displays the effects of the memory size on the disk read/write bandwidth in HDFS and MapReduce respectively. In these experiments, the task slot configuration on each node is 1_8 and the intermediate data is not compressed. "16G" and "32G" in the figures denote the memory size of a node. As Figure 2 shows, the influence of the memory size on the disk read/write bandwidth depends on the data volume. There is a growth of disk read bandwidth in HDFS when memory increases, as shown in Figure 2(a), due to the large amount of raw data; but after handling and processing the raw data, the disk write bandwidth of each workload in HDFS differs because of the final data volume. When the final data volume is small, memory has no effect on the disk write bandwidth, as for K-means in Figure 2(c). This result is also reflected in MapReduce.

Figure 2: The effects of memory on the disk read/write bandwidth in HDFS and MapReduce respectively

Compression. Figure 3 exhibits the effects of intermediate data compression on the disk read/write bandwidth in MapReduce. In these experiments, each node is configured with 32 GB of memory and the task slot configuration is 1_8. "off" and "on" in the figures denote whether the intermediate data is compressed or not. As Figure 3 shows, due to the reduction of the intermediate data volume with compression, the disk read/write bandwidth increases in MapReduce. In addition, compression has little impact on HDFS's disk read/write bandwidth, so we do not present that result.

Figure 3: The effects of compression on the disk read/write bandwidth in MapReduce

4.2 Disk Utilization

Task slots. Figure 4 depicts the effects of the number of task slots on the disk utilization in HDFS and MapReduce respectively. From Figure 4 we can draw two conclusions. First, the trends in disk utilization are the same when the number of task slots changes, i.e. the number of task slots has little impact on the disk utilization in either scenario. Second, the workloads behave differently with respect to disk utilization in the two scenarios. From Tables 6 and 7, the HDFS disk utilization of Aggregation is higher than that of the others, so Aggregation's HDFS disks may be a bottleneck; similarly, the MapReduce disk utilization of TeraSort is higher than that of the others. From Figure 4(b), the MapReduce disk utilization of the workloads is 5% or less for most of their execution time, so the disks are not busy, except for TeraSort, because of the large amount of TeraSort's intermediate data.

Figure 4: The effects of the number of task slots on the disk utilization in HDFS and MapReduce respectively

Table 6: the HDFS ratio (fraction of execution time spent above each disk utilization level)
AGG TS KM PR
>90%util 22.6% 5.2% 0.4% 0.5%
>95%util 16.4% 3.8% 0.3% 0.3%
>99%util 9.8% 2.4% 0.2% 0.2%

Memory. Figure 5 depicts the effects of the memory size on the disk utilization in HDFS and MapReduce respectively. As Figure 5(a)
shows, increasing the memory size has no impact on the disk utilization in HDFS. However, from Figure 5(b) we can see that there is a difference between HDFS and MapReduce. In MapReduce, the disk utilization of Aggregation and K-means does not change when the memory size changes, because the devices are not busy before the memory changes; the disk utilization of TeraSort and PageRank, however, is reduced when memory increases, as shown in Figure 5(b). So increasing the memory size can help reduce the number of I/O requests and ease the disk bottleneck.

Table 7: the MapReduce ratio (fraction of execution time spent above each disk utilization level)
AGG TS KM PR
>90%util 27.2% 0.1%
>95%util 15.6% 0.1%
>99%util 5.5% 0.1%

Figure 7: The effects of the number of task slots on the disk waiting time of I/O requests in HDFS and MapReduce respectively

Figure 8: The effects of memory on the disk waiting time of I/O requests in HDFS and MapReduce respectively

4.3 Average waiting time of I/O requests

Task slots. Figure 7 shows the effects of the number of task slots on the disk waiting time of I/O requests in HDFS and MapReduce respectively. As the figure shows, when the number of task slots increases, the disk average waiting time of I/O requests does not change.

Compression. From Figure 6(a) we can see that when the intermediate data is compressed, the HDFS disk utilization of TeraSort and Aggregation is high, unlike the other two workloads.
As Figure 6(b) shows, compression has no influence on the intermediate data's disk utilization for TeraSort, Aggregation and K-means, but it does for PageRank.

Figure 5: The effects of memory on the disk utilization in HDFS and MapReduce respectively

Figure 6: The effects of compression on the disk utilization in HDFS and MapReduce respectively

Memory. Figure 8 shows the effects of memory on the disk waiting time of I/O requests in HDFS and MapReduce respectively. From Figure 8 we can learn that the disk average waiting time of I/O requests varies with the memory size; in other words, the memory size has an impact on the disk waiting time of I/O requests, and the MapReduce disk waiting time of I/O requests is larger than HDFS's.

Figure 9: The effects of compression on the disk waiting time of I/O requests in HDFS and MapReduce respectively

Compression. Figure 9 depicts the effects of compression on the disk waiting time of I/O requests in HDFS and MapReduce respectively. From Figure 9, we can see that the disk average waiting time of I/O requests remains unchanged in HDFS because HDFS's data is not compressed; however, due to the reduction of the intermediate data volume with compression, the disk waiting time of I/O requests in MapReduce decreases. The MapReduce disk waiting time is larger than HDFS's because of their different access patterns, i.e. HDFS's access pattern is dominated by large sequential accesses, while MapReduce's is dominated by smaller random accesses. This result can be seen in Figure
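A back-of-the-envelope calculation using the Table 1 disk parameters (8.5 ms average seek, 4.2 ms average rotational latency, roughly 150 MB/s sustained transfer) shows why small random requests wait longer per byte than large sequential ones. The request sizes below are illustrative assumptions, not the measured avgrq-sz values.

```python
# Back-of-the-envelope model of disk time per megabyte moved, using the
# Table 1 disk parameters. Request sizes are illustrative assumptions.

SEEK_MS, ROT_MS, XFER_MB_S = 8.5, 4.2, 150.0

def ms_per_mb(request_kb, sequential):
    """Approximate disk time to move 1 MB: each request pays transfer time,
    plus one seek and rotational delay unless requests are back-to-back."""
    positioning = 0.0 if sequential else SEEK_MS + ROT_MS
    transfer = request_kb / 1024.0 / XFER_MB_S * 1000.0
    return (positioning + transfer) * (1024.0 / request_kb)

hdfs_like = ms_per_mb(512, sequential=True)       # large sequential accesses
mapreduce_like = ms_per_mb(64, sequential=False)  # small random accesses
```

On these numbers the random pattern costs roughly 30 times more disk time per megabyte than the sequential one, which is consistent with the observation that MapReduce's queue waiting times exceed HDFS's.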
It can be seen from the figure that as the intermediate data is compressed, the disk average size of I/O requests decreases, and the percentage of reduction varies with the type of workload due to the intermediate data volume. As Figure 12 shows, there is little influence on the disk average size of I/O requests when the intermediate data volume is small, as for Aggregation and K-means. In addition, whether the intermediate data is compressed or not has no impact on HDFS's disk average size of I/O requests, so we do not present that result.

Figure 10: The effects of the number of task slots on the disk average size of I/O requests in HDFS and MapReduce respectively

Figure 11: The effects of memory on the disk average size of I/O requests in HDFS and MapReduce respectively

Figure 12: The effects of compression on the disk average size of I/O requests in MapReduce

4.4 Average size of I/O requests

Task slots. Figure 10 depicts the effects of the number of task slots on the disk average size of I/O requests in HDFS and MapReduce respectively. As task slots are a computing resource, the figures show little impact on the disk average size of I/O requests when the number of task slots changes. Also, the average size of HDFS I/O requests is larger than MapReduce's because they differ in I/O granularity; in other words, HDFS's I/O granularity is larger than MapReduce's.

Memory. The same result can be seen in Figure 11, which reflects the effects of memory on the disk average size of I/O requests. However, the average size of I/O requests in Figure 11(b) is larger than in Figure 10(b) because of the influence of compression, which is reflected in Figure 12.

Compression. Figure 12 displays the effects of compression on the disk average size of I/O requests in MapReduce.

5. Conclusion and Future Work

In this paper, we have presented a study of the I/O characterization of big data workloads.
These workloads are typical, representative and common in search engines, social networks and electronic commerce. In contrast with previous work, we take into account disk read/write bandwidth, average waiting time of I/O requests, average size of I/O requests and storage device utilization, which are important for big data workloads. Our observations and implications can be summarized as follows: 1) the number of task slots has little effect on the four I/O metrics, but increasing it appropriately can accelerate application execution; 2) increasing memory can reduce the number of I/O requests, alleviate the pressure of disk reads/writes, and effectively improve I/O performance when the data size is large; 3) the compression of intermediate data mainly affects MapReduce I/O performance and has little influence on HDFS I/O performance, although compression consumes some CPU resources, which may influence a job's execution time; 4) HDFS data and MapReduce intermediate data have different I/O modes, which leads us to configure their storage systems according to their respective I/O modes. In the future, we plan to combine two factors, a low-level description of physical resources, including CPU and disk, and the high-level functional composition of big data workloads, to reveal the major sources of I/O demand in these workloads.

Acknowledgments

This paper is supported by the National Science Foundation of China under grants no. , , and , and the Huawei Research Program YB . We would like to thank Zijian Ming and Professor Lei Wang for their help in running the benchmarks.

References

[1]
[2]
[3]
[4] C. L. Abad, Y. Lu, and R. H. Campbell. DARE: Adaptive data replication for efficient cluster scheduling. In CLUSTER, 2011.
[5] C. L. Abad, N. Roberts, Y. Lu, and R. H. Campbell. A storage-centric analysis of MapReduce workloads: File popularity, temporal locality and arrival patterns. In IISWC, 2012.
[6] G. Ananthanarayanan, S. Agarwal, S. Kandula, A. Greenberg, I. Stoica, D. Harlan, and E.
Harris. Scarlett: Coping with skewed content popularity in MapReduce clusters. In Proceedings of the Sixth Conference on Computer Systems, 2011.
[7] L. N. Bairavasundaram, A. C. Arpaci-Dusseau, R. H. Arpaci-Dusseau, G. R. Goodson, and B. Schroeder. An analysis of data corruption in the storage stack. ACM Transactions on Storage (TOS), 2008.
[8] Y. Chen, S. Alspaugh, and R. Katz. Interactive analytical processing in big data systems: A cross-industry study of MapReduce workloads. In VLDB, 2012.
[9] Y. Chen, S. Alspaugh, and R. H. Katz. Design insights for MapReduce from diverse production workloads.
[10] Y. Chen, K. Srinivasan, G. Goodson, and R. Katz. Design implications for enterprise storage systems via multi-dimensional trace analysis. In Proceedings of the Twenty-Third ACM Symposium on Operating Systems Principles, 2011.
[11] J. Dai. Performance, utilization and power characterization of Hadoop clusters using HiBench. In Hadoop in China, 2010.
[12] C. Delimitrou, S. Sankar, K. Vaid, and C. Kozyrakis. Decoupling datacenter studies from access to large-scale applications: A modeling approach for storage workloads. In IISWC, 2011.
[13] D. Ersoz, M. S. Yousif, and C. R. Das. Characterizing network traffic in a cluster-based, multi-tier data center. In ICDCS, 2007.
[14] B. Fan, W. Tantisiriroj, L. Xiao, and G. Gibson. DiskReduce: RAID for data-intensive scalable computing. In Proceedings of the 4th Annual Workshop on Petascale Data Storage, 2009.
[15] A. Iamnitchi, S. Doraimani, and G. Garzoglio. Workload characterization in a high-energy data grid and impact on resource management. In CLUSTER, 2009.
[16] S. Kavalanekar, B. Worthington, Q. Zhang, and V. Sharda. Characterization of storage workload traces from production Windows servers. In IISWC, 2008.
[17] S. Kavulya, J. Tan, R. Gandhi, and P. Narasimhan. An analysis of traces from a production MapReduce cluster. In CCGrid, 2010.
[18] C. Kozyrakis, A. Kansal, S. Sankar, and K. Vaid. Server engineering insights for large-scale online services. IEEE Micro, 2010.
[19] A. Kyrola, G. Blelloch, and C. Guestrin. GraphChi: Large-scale graph computation on just a PC. In OSDI, 2012.
[20] A. Pavlo, E. Paulson, A. Rasin, D. J. Abadi, D. J. DeWitt, S. Madden, and M. Stonebraker. A comparison of approaches to large-scale data analysis. In ACM SIGMOD, 2009.
[21] Z. Ren, X. Xu, J. Wan, W. Shi, and M. Zhou. Workload characterization on a production Hadoop cluster: A case study on Taobao. In IISWC, 2012.
[22] D. S. Roselli, J. R. Lorch, T. E. Anderson, et al. A comparison of file system workloads.
In USENIX Annual Technical Conference, pages 41-54, 2000.
[23] S. Sankar and K. Vaid. Storage characterization for unstructured data in online services applications. In IISWC, 2009.
[24] F. Wang, Q. Xin, B. Hong, S. A. Brandt, E. L. Miller, D. D. Long, and T. T. McLarty. File system workload analysis for large scale scientific computing applications. In MSST, 2004.
[25] L. Wang, J. Zhan, C. Luo, Y. Zhu, Q. Yang, Y. He, W. Gao, Z. Jia, Y. Shi, S. Zhang, et al. BigDataBench: A big data benchmark suite from internet services. In HPCA, 2014.
On Big Data Benchmarking
On Big Data Benchmarking 1 Rui Han and 2 Xiaoyi Lu 1 Department of Computing, Imperial College London 2 Ohio State University [email protected], [email protected] Abstract Big data systems address
DELL s Oracle Database Advisor
DELL s Oracle Database Advisor Underlying Methodology A Dell Technical White Paper Database Solutions Engineering By Roger Lopez Phani MV Dell Product Group January 2010 THIS WHITE PAPER IS FOR INFORMATIONAL
Hadoop on the Gordon Data Intensive Cluster
Hadoop on the Gordon Data Intensive Cluster Amit Majumdar, Scientific Computing Applications Mahidhar Tatineni, HPC User Services San Diego Supercomputer Center University of California San Diego Dec 18,
On Big Data Benchmarking
On Big Data Benchmarking 1 Rui Han and 2 Xiaoyi Lu 1 Department of Computing, Imperial College London 2 Ohio State University [email protected], [email protected] Abstract Big data systems address
Analysing Large Web Log Files in a Hadoop Distributed Cluster Environment
Analysing Large Files in a Hadoop Distributed Cluster Environment S Saravanan, B Uma Maheswari Department of Computer Science and Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham,
A Study on Workload Imbalance Issues in Data Intensive Distributed Computing
A Study on Workload Imbalance Issues in Data Intensive Distributed Computing Sven Groot 1, Kazuo Goda 1, and Masaru Kitsuregawa 1 University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505, Japan Abstract.
A REVIEW PAPER ON THE HADOOP DISTRIBUTED FILE SYSTEM
A REVIEW PAPER ON THE HADOOP DISTRIBUTED FILE SYSTEM Sneha D.Borkar 1, Prof.Chaitali S.Surtakar 2 Student of B.E., Information Technology, J.D.I.E.T, [email protected] Assistant Professor, Information
Optimization of Cluster Web Server Scheduling from Site Access Statistics
Optimization of Cluster Web Server Scheduling from Site Access Statistics Nartpong Ampornaramveth, Surasak Sanguanpong Faculty of Computer Engineering, Kasetsart University, Bangkhen Bangkok, Thailand
Outline. High Performance Computing (HPC) Big Data meets HPC. Case Studies: Some facts about Big Data Technologies HPC and Big Data converging
Outline High Performance Computing (HPC) Towards exascale computing: a brief history Challenges in the exascale era Big Data meets HPC Some facts about Big Data Technologies HPC and Big Data converging
Performance Characteristics of VMFS and RDM VMware ESX Server 3.0.1
Performance Study Performance Characteristics of and RDM VMware ESX Server 3.0.1 VMware ESX Server offers three choices for managing disk access in a virtual machine VMware Virtual Machine File System
Unstructured Data Accelerator (UDA) Author: Motti Beck, Mellanox Technologies Date: March 27, 2012
Unstructured Data Accelerator (UDA) Author: Motti Beck, Mellanox Technologies Date: March 27, 2012 1 Market Trends Big Data Growing technology deployments are creating an exponential increase in the volume
Architecture Support for Big Data Analytics
Architecture Support for Big Data Analytics Ahsan Javed Awan EMJD-DC (KTH-UPC) (http://uk.linkedin.com/in/ahsanjavedawan/) Supervisors: Mats Brorsson(KTH), Eduard Ayguade(UPC), Vladimir Vlassov(KTH) 1
Can High-Performance Interconnects Benefit Memcached and Hadoop?
Can High-Performance Interconnects Benefit Memcached and Hadoop? D. K. Panda and Sayantan Sur Network-Based Computing Laboratory Department of Computer Science and Engineering The Ohio State University,
Exploring RAID Configurations
Exploring RAID Configurations J. Ryan Fishel Florida State University August 6, 2008 Abstract To address the limits of today s slow mechanical disks, we explored a number of data layouts to improve RAID
GraySort on Apache Spark by Databricks
GraySort on Apache Spark by Databricks Reynold Xin, Parviz Deyhim, Ali Ghodsi, Xiangrui Meng, Matei Zaharia Databricks Inc. Apache Spark Sorting in Spark Overview Sorting Within a Partition Range Partitioner
Implement Hadoop jobs to extract business value from large and varied data sets
Hadoop Development for Big Data Solutions: Hands-On You Will Learn How To: Implement Hadoop jobs to extract business value from large and varied data sets Write, customize and deploy MapReduce jobs to
Performance measurement of a Hadoop Cluster
Performance measurement of a Hadoop Cluster Technical white paper Created: February 8, 2012 Last Modified: February 23 2012 Contents Introduction... 1 The Big Data Puzzle... 1 Apache Hadoop and MapReduce...
Linux Performance Optimizations for Big Data Environments
Linux Performance Optimizations for Big Data Environments Dominique A. Heger Ph.D. DHTechnologies (Performance, Capacity, Scalability) www.dhtusa.com Data Nubes (Big Data, Hadoop, ML) www.datanubes.com
Federated Cloud-based Big Data Platform in Telecommunications
Federated Cloud-based Big Data Platform in Telecommunications Chao Deng dengchao@chinamobilecom Yujian Du duyujian@chinamobilecom Ling Qian qianling@chinamobilecom Zhiguo Luo luozhiguo@chinamobilecom Meng
Clash of the Titans: MapReduce vs. Spark for Large Scale Data Analytics
Clash of the Titans: MapReduce vs. Spark for Large Scale Data Analytics Juwei Shi, Yunjie Qiu, Umar Farooq Minhas, Limei Jiao, Chen Wang, Berthold Reinwald, and Fatma Özcan IBM Research China IBM Almaden
Benchmarking Hadoop & HBase on Violin
Technical White Paper Report Technical Report Benchmarking Hadoop & HBase on Violin Harnessing Big Data Analytics at the Speed of Memory Version 1.0 Abstract The purpose of benchmarking is to show advantages
Characterizing Task Usage Shapes in Google s Compute Clusters
Characterizing Task Usage Shapes in Google s Compute Clusters Qi Zhang 1, Joseph L. Hellerstein 2, Raouf Boutaba 1 1 University of Waterloo, 2 Google Inc. Introduction Cloud computing is becoming a key
Accelerating Server Storage Performance on Lenovo ThinkServer
Accelerating Server Storage Performance on Lenovo ThinkServer Lenovo Enterprise Product Group April 214 Copyright Lenovo 214 LENOVO PROVIDES THIS PUBLICATION AS IS WITHOUT WARRANTY OF ANY KIND, EITHER
Can the Elephants Handle the NoSQL Onslaught?
Can the Elephants Handle the NoSQL Onslaught? Avrilia Floratou, Nikhil Teletia David J. DeWitt, Jignesh M. Patel, Donghui Zhang University of Wisconsin-Madison Microsoft Jim Gray Systems Lab Presented
Efficient Data Replication Scheme based on Hadoop Distributed File System
, pp. 177-186 http://dx.doi.org/10.14257/ijseia.2015.9.12.16 Efficient Data Replication Scheme based on Hadoop Distributed File System Jungha Lee 1, Jaehwa Chung 2 and Daewon Lee 3* 1 Division of Supercomputing,
CSE-E5430 Scalable Cloud Computing Lecture 2
CSE-E5430 Scalable Cloud Computing Lecture 2 Keijo Heljanko Department of Computer Science School of Science Aalto University [email protected] 14.9-2015 1/36 Google MapReduce A scalable batch processing
How To Analyze Log Files In A Web Application On A Hadoop Mapreduce System
Analyzing Web Application Log Files to Find Hit Count Through the Utilization of Hadoop MapReduce in Cloud Computing Environment Sayalee Narkhede Department of Information Technology Maharashtra Institute
Elasticsearch on Cisco Unified Computing System: Optimizing your UCS infrastructure for Elasticsearch s analytics software stack
Elasticsearch on Cisco Unified Computing System: Optimizing your UCS infrastructure for Elasticsearch s analytics software stack HIGHLIGHTS Real-Time Results Elasticsearch on Cisco UCS enables a deeper
Introduction to Big Data! with Apache Spark" UC#BERKELEY#
Introduction to Big Data! with Apache Spark" UC#BERKELEY# This Lecture" The Big Data Problem" Hardware for Big Data" Distributing Work" Handling Failures and Slow Machines" Map Reduce and Complex Jobs"
Best Practices for Deploying SSDs in a Microsoft SQL Server 2008 OLTP Environment with Dell EqualLogic PS-Series Arrays
Best Practices for Deploying SSDs in a Microsoft SQL Server 2008 OLTP Environment with Dell EqualLogic PS-Series Arrays Database Solutions Engineering By Murali Krishnan.K Dell Product Group October 2009
Introduction. Various user groups requiring Hadoop, each with its own diverse needs, include:
Introduction BIG DATA is a term that s been buzzing around a lot lately, and its use is a trend that s been increasing at a steady pace over the past few years. It s quite likely you ve also encountered
Virtuoso and Database Scalability
Virtuoso and Database Scalability By Orri Erling Table of Contents Abstract Metrics Results Transaction Throughput Initializing 40 warehouses Serial Read Test Conditions Analysis Working Set Effect of
An Oracle White Paper June 2012. High Performance Connectors for Load and Access of Data from Hadoop to Oracle Database
An Oracle White Paper June 2012 High Performance Connectors for Load and Access of Data from Hadoop to Oracle Database Executive Overview... 1 Introduction... 1 Oracle Loader for Hadoop... 2 Oracle Direct
A USE CASE OF BIG DATA EXPLORATION & ANALYSIS WITH HADOOP: STATISTICS REPORT GENERATION
A USE CASE OF BIG DATA EXPLORATION & ANALYSIS WITH HADOOP: STATISTICS REPORT GENERATION Sumitha VS 1, Shilpa V 2 1 M.E. Final Year, Department of Computer Science Engineering (IT), UVCE, Bangalore, [email protected]
2009 Oracle Corporation 1
The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material,
Shareability and Locality Aware Scheduling Algorithm in Hadoop for Mobile Cloud Computing
Shareability and Locality Aware Scheduling Algorithm in Hadoop for Mobile Cloud Computing Hsin-Wen Wei 1,2, Che-Wei Hsu 2, Tin-Yu Wu 3, Wei-Tsong Lee 1 1 Department of Electrical Engineering, Tamkang University
GraySort and MinuteSort at Yahoo on Hadoop 0.23
GraySort and at Yahoo on Hadoop.23 Thomas Graves Yahoo! May, 213 The Apache Hadoop[1] software library is an open source framework that allows for the distributed processing of large data sets across clusters
Hadoop Cluster Applications
Hadoop Overview Data analytics has become a key element of the business decision process over the last decade. Classic reporting on a dataset stored in a database was sufficient until recently, but yesterday
Dell Reference Configuration for Hortonworks Data Platform
Dell Reference Configuration for Hortonworks Data Platform A Quick Reference Configuration Guide Armando Acosta Hadoop Product Manager Dell Revolutionary Cloud and Big Data Group Kris Applegate Solution
Data Warehousing and Analytics Infrastructure at Facebook. Ashish Thusoo & Dhruba Borthakur athusoo,[email protected]
Data Warehousing and Analytics Infrastructure at Facebook Ashish Thusoo & Dhruba Borthakur athusoo,[email protected] Overview Challenges in a Fast Growing & Dynamic Environment Data Flow Architecture,
Telecom Data processing and analysis based on Hadoop
COMPUTER MODELLING & NEW TECHNOLOGIES 214 18(12B) 658-664 Abstract Telecom Data processing and analysis based on Hadoop Guofan Lu, Qingnian Zhang *, Zhao Chen Wuhan University of Technology, Wuhan 4363,China
Big Data and Apache Hadoop s MapReduce
Big Data and Apache Hadoop s MapReduce Michael Hahsler Computer Science and Engineering Southern Methodist University January 23, 2012 Michael Hahsler (SMU/CSE) Hadoop/MapReduce January 23, 2012 1 / 23
Big Data. Value, use cases and architectures. Petar Torre Lead Architect Service Provider Group. Dubrovnik, Croatia, South East Europe 20-22 May, 2013
Dubrovnik, Croatia, South East Europe 20-22 May, 2013 Big Data Value, use cases and architectures Petar Torre Lead Architect Service Provider Group 2011 2013 Cisco and/or its affiliates. All rights reserved.
An Experimental Approach Towards Big Data for Analyzing Memory Utilization on a Hadoop cluster using HDFS and MapReduce.
An Experimental Approach Towards Big Data for Analyzing Memory Utilization on a Hadoop cluster using HDFS and MapReduce. Amrit Pal Stdt, Dept of Computer Engineering and Application, National Institute
A Novel Cloud Based Elastic Framework for Big Data Preprocessing
School of Systems Engineering A Novel Cloud Based Elastic Framework for Big Data Preprocessing Omer Dawelbeit and Rachel McCrindle October 21, 2014 University of Reading 2008 www.reading.ac.uk Overview
MapReduce Evaluator: User Guide
University of A Coruña Computer Architecture Group MapReduce Evaluator: User Guide Authors: Jorge Veiga, Roberto R. Expósito, Guillermo L. Taboada and Juan Touriño December 9, 2014 Contents 1 Overview
A Micro-benchmark Suite for Evaluating Hadoop RPC on High-Performance Networks
A Micro-benchmark Suite for Evaluating Hadoop RPC on High-Performance Networks Xiaoyi Lu, Md. Wasi- ur- Rahman, Nusrat Islam, and Dhabaleswar K. (DK) Panda Network- Based Compu2ng Laboratory Department
Maximum performance, minimal risk for data warehousing
SYSTEM X SERVERS SOLUTION BRIEF Maximum performance, minimal risk for data warehousing Microsoft Data Warehouse Fast Track for SQL Server 2014 on System x3850 X6 (95TB) The rapid growth of technology has
Performance Analysis of Mixed Distributed Filesystem Workloads
Performance Analysis of Mixed Distributed Filesystem Workloads Esteban Molina-Estolano, Maya Gokhale, Carlos Maltzahn, John May, John Bent, Scott Brandt Motivation Hadoop-tailored filesystems (e.g. CloudStore)
Towards an Optimized Big Data Processing System
Towards an Optimized Big Data Processing System The Doctoral Symposium of the IEEE/ACM CCGrid 2013 Delft, The Netherlands Bogdan Ghiţ, Alexandru Iosup, and Dick Epema Parallel and Distributed Systems Group
Dell Virtualization Solution for Microsoft SQL Server 2012 using PowerEdge R820
Dell Virtualization Solution for Microsoft SQL Server 2012 using PowerEdge R820 This white paper discusses the SQL server workload consolidation capabilities of Dell PowerEdge R820 using Virtualization.
Ignify ecommerce. Item Requirements Notes
wwwignifycom Tel (888) IGNIFY5 sales@ignifycom Fax (408) 516-9006 Ignify ecommerce Server Configuration 1 Hardware Requirement (Minimum configuration) Item Requirements Notes Operating System Processor
Benchmarking Cassandra on Violin
Technical White Paper Report Technical Report Benchmarking Cassandra on Violin Accelerating Cassandra Performance and Reducing Read Latency With Violin Memory Flash-based Storage Arrays Version 1.0 Abstract
Hadoop: Embracing future hardware
Hadoop: Embracing future hardware Suresh Srinivas @suresh_m_s Page 1 About Me Architect & Founder at Hortonworks Long time Apache Hadoop committer and PMC member Designed and developed many key Hadoop
Performance Comparison of SQL based Big Data Analytics with Lustre and HDFS file systems
Performance Comparison of SQL based Big Data Analytics with Lustre and HDFS file systems Rekha Singhal and Gabriele Pacciucci * Other names and brands may be claimed as the property of others. Lustre File
Fault Tolerance in Hadoop for Work Migration
1 Fault Tolerance in Hadoop for Work Migration Shivaraman Janakiraman Indiana University Bloomington ABSTRACT Hadoop is a framework that runs applications on large clusters which are built on numerous
Apache Hadoop. Alexandru Costan
1 Apache Hadoop Alexandru Costan Big Data Landscape No one-size-fits-all solution: SQL, NoSQL, MapReduce, No standard, except Hadoop 2 Outline What is Hadoop? Who uses it? Architecture HDFS MapReduce Open
PSAM, NEC PCIe SSD Appliance for Microsoft SQL Server (Reference Architecture) September 11 th, 2014 NEC Corporation
PSAM, NEC PCIe SSD Appliance for Microsoft SQL Server (Reference Architecture) September 11 th, 2014 NEC Corporation 1. Overview of NEC PCIe SSD Appliance for Microsoft SQL Server Page 2 NEC Corporation
HPCHadoop: A framework to run Hadoop on Cray X-series supercomputers
HPCHadoop: A framework to run Hadoop on Cray X-series supercomputers Scott Michael, Abhinav Thota, and Robert Henschel Pervasive Technology Institute Indiana University Bloomington, IN, USA Email: [email protected]
Part V Applications. What is cloud computing? SaaS has been around for awhile. Cloud Computing: General concepts
Part V Applications Cloud Computing: General concepts Copyright K.Goseva 2010 CS 736 Software Performance Engineering Slide 1 What is cloud computing? SaaS: Software as a Service Cloud: Datacenters hardware
Large-Scale Data Sets Clustering Based on MapReduce and Hadoop
Journal of Computational Information Systems 7: 16 (2011) 5956-5963 Available at http://www.jofcis.com Large-Scale Data Sets Clustering Based on MapReduce and Hadoop Ping ZHOU, Jingsheng LEI, Wenjun YE
Fast, Low-Overhead Encryption for Apache Hadoop*
Fast, Low-Overhead Encryption for Apache Hadoop* Solution Brief Intel Xeon Processors Intel Advanced Encryption Standard New Instructions (Intel AES-NI) The Intel Distribution for Apache Hadoop* software
Comparative analysis of mapreduce job by keeping data constant and varying cluster size technique
Comparative analysis of mapreduce job by keeping data constant and varying cluster size technique Mahesh Maurya a, Sunita Mahajan b * a Research Scholar, JJT University, MPSTME, Mumbai, India,[email protected]
Department of Computer Science University of Cyprus EPL646 Advanced Topics in Databases. Lecture 15
Department of Computer Science University of Cyprus EPL646 Advanced Topics in Databases Lecture 15 Big Data Management V (Big-data Analytics / Map-Reduce) Chapter 16 and 19: Abideboul et. Al. Demetris
RevoScaleR Speed and Scalability
EXECUTIVE WHITE PAPER RevoScaleR Speed and Scalability By Lee Edlefsen Ph.D., Chief Scientist, Revolution Analytics Abstract RevoScaleR, the Big Data predictive analytics library included with Revolution
Scaling the Deployment of Multiple Hadoop Workloads on a Virtualized Infrastructure
Scaling the Deployment of Multiple Hadoop Workloads on a Virtualized Infrastructure The Intel Distribution for Apache Hadoop* software running on 808 VMs using VMware vsphere Big Data Extensions and Dell
Using VMware VMotion with Oracle Database and EMC CLARiiON Storage Systems
Using VMware VMotion with Oracle Database and EMC CLARiiON Storage Systems Applied Technology Abstract By migrating VMware virtual machines from one physical environment to another, VMware VMotion can
NoSQL Performance Test In-Memory Performance Comparison of SequoiaDB, Cassandra, and MongoDB
bankmark UG (haftungsbeschränkt) Bahnhofstraße 1 9432 Passau Germany www.bankmark.de [email protected] T +49 851 25 49 49 F +49 851 25 49 499 NoSQL Performance Test In-Memory Performance Comparison of SequoiaDB,
EMC Unified Storage for Microsoft SQL Server 2008
EMC Unified Storage for Microsoft SQL Server 2008 Enabled by EMC CLARiiON and EMC FAST Cache Reference Copyright 2010 EMC Corporation. All rights reserved. Published October, 2010 EMC believes the information
The Comprehensive Performance Rating for Hadoop Clusters on Cloud Computing Platform
The Comprehensive Performance Rating for Hadoop Clusters on Cloud Computing Platform Fong-Hao Liu, Ya-Ruei Liou, Hsiang-Fu Lo, Ko-Chin Chang, and Wei-Tsong Lee Abstract Virtualization platform solutions
America s Most Wanted a metric to detect persistently faulty machines in Hadoop
America s Most Wanted a metric to detect persistently faulty machines in Hadoop Dhruba Borthakur and Andrew Ryan dhruba,[email protected] Presented at IFIP Workshop on Failure Diagnosis, Chicago June
Energy Efficient MapReduce
Energy Efficient MapReduce Motivation: Energy consumption is an important aspect of datacenters efficiency, the total power consumption in the united states has doubled from 2000 to 2005, representing
Maximizing Hadoop Performance with Hardware Compression
Maximizing Hadoop Performance with Hardware Compression Robert Reiner Director of Marketing Compression and Security Exar Corporation November 2012 1 What is Big? sets whose size is beyond the ability
