BigDataBench 2.1 User's Manual
Contents

1 Background
2 Introduction of BigDataBench
2.1 Our Benchmarking Methodology
2.2 Chosen Data Sets and Workloads
2.3 Use Case
3 Prerequisite Software Packages
4 Data Generation
4.1 Text Data
4.2 Graph Data
4.3 Table Data
5 Workloads
5.1 MicroBenchmarks
5.1.1 Sort & Wordcount & Grep
5.1.2 BFS (Breadth-First Search)
5.2 Basic Datastore Operations
5.2.1 Write
5.2.2 Read
5.2.3 Scan
5.3 Relational Query
5.4 Search Engine
5.4.1 Search Engine Web Serving
5.4.2 SVM
5.4.3 PageRank
5.4.4 Index
5.5 Social Network
5.5.1 Web Serving
5.5.2 Kmeans
5.5.3 Connected Components
5.6 Ecommerce System
5.6.1 Ecommerce System Web Serving
5.6.2 Collaborative Filtering Recommendation
5.6.3 Bayes
6 Reference
This document presents information on BigDataBench, a big data benchmark suite from web search engines, including a brief introduction and usage instructions. The information and specifications contained here are for researchers who are interested in big data benchmarking.

Publishing information:
Release: 2.1
Date: 22/11/2013

Contact information:
Website:

1 Background

BigDataBench is a big data benchmark suite from web search engines. We now provide the second release of BigDataBench for download. In this edition, we focus on big data workloads in three important application domains of Internet services: search engine, social networks, and electronic commerce, chosen according to widely accepted metrics: the number of page views and daily visitors. The second release (BigDataBench 2.0) includes 19 types of big data applications in 6 categories, covering the most important domains in Internet services. It also provides an innovative data generation tool that generates scalable volumes of big data from a small amount of real data while preserving the semantics and locality of the real data. The data sets in BigDataBench are generated by this tool. Users can also combine other applications in BigDataBench according to their own requirements.

2 Introduction of BigDataBench

2.1 Our Benchmarking Methodology

The philosophy of the BigDataBench methodology is "From Real System to Real System". We widely investigate typical Internet application domains, then characterize the big data benchmark from two aspects: data sets and workloads. The methodology of our benchmark is shown in Figure 1. First of all, we investigate the application categories of Internet services and choose the dominant application domains for further characterization based on the number of page views. According to the analysis in [36], the top three application domains account for 80% of the page views of all Internet services. They are adopted as the BigDataBench application domains: search engine, social networks, and electronic commerce. These domains represent the typical applications of Internet services, and their applications cover a wide range of categories, meeting the big data benchmarking requirement of workload diversity.
Figure 1. BigDataBench Methodology. (The figure depicts representative real data sets, covering structured, semi-structured, and unstructured table, text, and graph data, extended by a synthetic data generation tool that preserves the 4V data characteristics, and paired with diverse and important workloads spanning offline analytics, realtime analytics, and online services over representative software stacks.)

After choosing the application domains, we characterize the data sets and workloads of BigDataBench based on the requirements proposed in the next subsection.

2.2 Chosen Data Sets and Workloads

As analyzed in our article, the data sets should cover the different data types of big data applications. Based on the investigation of the three application domains, we collect six representative real data sets. Table 2 shows the characteristics of the six real data sets and their corresponding workloads, and Table 3 shows the diversity of the data sets. The original data is real, but not big. We need to scale up the volume of the data while keeping its veracity.

Table 2: The summary of six real data sets

Table 3: Data diversity
Further, we choose three different application domains to build our benchmark, because they cover the most general big data workloads.

For the search engine domain, the typical workloads are Web Serving, Index, and PageRank. For web applications, Web Serving is a necessary function module that provides an entrance to the service. Index is also an important module, widely used to speed up document searching. PageRank is an important component that ranks the search results for search engines.

For an ecommerce system, the typical workloads include Web Serving and off-line data analysis. The Web Serving workload shows product information and provides a convenient platform for users to buy or sell products. The off-line data analysis workload mainly performs data mining calculations to promote the purchase rate of products. In BigDataBench we select these data analysis workloads: Collaborative Filtering and Bayes.

For a social networking service, the typical workloads are Web Serving and graph operations. The Web Serving workload aims to interact with each user and show the proper contents. Graph operations are a series of calculations upon graph data. In BigDataBench, we use Kmeans and breadth-first search to construct the graph operation workloads.

Besides, we define three operation sets: Micro Benchmarks (Sort, Grep, Wordcount, and BFS), Database Basic Datastore Operations (read/write/scan), and Relational Query (select/aggregation/join) to provide more workload choices.

Table 6: The summary of BigDataBench. (Benchmark i-(1,..,j) means the 1st,..,j-th implementations of Benchmark i, respectively.)

By choosing different operations and environments, users can compose specific benchmarks to test for specific purposes. For example, the basic applications under the MapReduce environment can be chosen to test whether a type of architecture is suitable for running MapReduce jobs.
2.3 Use Case

This subsection describes application examples of BigDataBench. The general procedure for using BigDataBench is as follows:

1. Choose the proper workloads. Select workloads for your specific purpose, for example basic operations in a Hadoop environment or typical search engine workloads.
2. Prepare the environment for the corresponding workloads. Before running the experiments, the environment should be prepared first, for example the Hadoop environment.
3. Prepare the needed data for the workloads. Generally, it is necessary to generate the data for the experiments with the data generation tools.
4. Run the corresponding applications. With all preparations done, start the workload applications and/or performance monitoring tools in this step.
5. Collect the results of the experiments.

A sketch of an end-to-end session following these five steps appears at the end of this subsection. Here we provide two use cases to show how to use our benchmark for different evaluation tasks.

Case one: from the perspective of a web site maintainer. If the application scenario and software are known, the maintainer wants to choose suitable hardware facilities. For example, someone has developed a search engine web site and uses Hadoop, Hive, and HBase as the infrastructure. Now he wants to evaluate whether specific hardware facilities are suitable for his scenario using our benchmark. First, he should select the search engine workloads, namely Web Serving, Index, and PageRank. The basic operations like Sort, Wordcount, and Grep should also be included. To cover the Hive and HBase workloads, he should also select the Hive queries and the read, write, and scan operations of HBase. Next, he should prepare the environment and the corresponding data. Finally, he runs each selected workload and observes the results to make the evaluation.

Case two: from the perspective of an architect. Suppose that someone is planning to design a new machine for common big data usage. It is not enough to run a subset of the workloads, since he does not know which specific application scenarios and software stacks the new machine will be used for. A comprehensive evaluation is needed, so he should run every workload to reflect the performance of different application scenarios, programming frameworks, data warehouses, and NoSQL databases. Only in this way can he claim that his new design is indeed beneficial for big data usage.
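Putting the five general steps together, a minimal end-to-end session might look like the following sketch. It assumes a running Hadoop cluster and uses the PageRank workload from Section 5.4; the monitoring tool and all parameter values are illustrative assumptions, not part of the BigDataBench release.

# Step 1: choose a workload (here: PageRank from the Search Engine domain).
# Step 2: prepare the environment (a configured Hadoop installation is assumed).
start-all.sh
# Step 3: generate the input data with the workload's data generation script.
sh gendata_pagerank.sh
# Step 4: run the workload alongside a simple system-level monitor.
vmstat 5 > vmstat.log &
MON_PID=$!
sh run_pagerank.sh 1000000 16 /pagerank/edge_file nosym   # illustrative values
# Step 5: collect the results.
kill $MON_PID
cat vmstat.log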
Other use cases of BigDataBench include:

Web serving applications: using BigDataBench to study the architectural features of Web Serving applications in a big data scenario (Search Engine).

Data analysis workloads: another kind of use case is to observe the architectural characteristics of typical data analysis workloads (for example PageRank and Recommendation).

Different storage systems: BigDataBench also provides different data management systems (for example HBase, Cassandra, and Hive). Users can choose one or more of them and observe their architectural features by running the basic operations (Sort, Grep, Wordcount).

Different programming models: users can use BigDataBench to study three different programming models: MPI, MapReduce, and Spark.

3 Prerequisite Software Packages

Software     Version    Download
Hadoop
HBase
Cassandra
MongoDB
Mahout
Hive                    #GettingStarted-InstallationandConfiguration
Spark
Impala                  cloudera-docs/impala/latest/installing-and-using-impala/ciiu_install.html
MPICH
GSL
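Once the packages are installed, it can be worth confirming that each tool is reachable from the shell before preparing any data. The checks below use each tool's standard version command; this list is a suggested sanity check, not part of the BigDataBench scripts.

# Suggested sanity checks for the prerequisite packages.
hadoop version          # Hadoop
hbase version           # HBase
cassandra -v            # Cassandra
mongod --version        # MongoDB
hive --version          # Hive
mpichversion            # MPICH
gsl-config --version    # GSL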
4 Data Generation

In BigDataBench, we provide three kinds of data generation tools, which generate text data, graph data, and table data. Generally, the process contains two sub-steps. The first sub-step is to prepare for generating big data by analyzing the characteristics of the seed data, which is real data owned by users; the second sub-step is to expand the seed data to big data while maintaining the features of the seed data.

4.1 Text Data

We provide a data generation tool which can generate text data at a user-specified scale. In BigDataBench 2.1 we analyze the wiki data sets to build a model, and our text data generation tool produces big data based on that model.

Usage

Generate the data:

sh gen_data.sh model_name number_of_files lines_per_file

<model_name>: the name of the model used to generate new data
<number_of_files>: the number of files to generate
<lines_per_file>: the number of lines in each file

For example:

sh gen_data.sh lda_wiki1w

This command generates 10 files, each with the given number of lines, using the model lda_wiki1w.

Note: the tool requires GSL (the GNU Scientific Library). Before you run the program, please make sure that GSL is installed.
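As a concrete illustration of the argument order above, the following invocation asks for 10 files of 10,000 lines each from the wiki-derived model; the numeric values are assumptions chosen for illustration, not recommended settings.

# Illustrative parameters: 10 files x 10000 lines from the lda_wiki1w model.
sh gen_data.sh lda_wiki1w 10 10000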
4.2 Graph Data

Here we use the Kronecker graph generator to generate data that is both mathematically tractable and has all the structural properties of the real data set. In BigDataBench 2.1 we analyze the Google, Facebook, and Amazon data sets to build models, and our graph data generation tool produces big data based on these models.

Usage

Generate the data:

./krongen \
-o: output graph file name (default: 'graph.txt')
-m: the initiator matrix (in Matlab notation)
-i: iterations of the Kronecker product (default: 5)
-s: random seed (0 - time seed) (default: 0)

For example:

./krongen -o:../data-outfile/amazon_gen.txt -m:" ; " -i:
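The matrix entries and iteration count of the example above were values specific to the Amazon-derived model and are not preserved here. A self-contained variant is shown below; the 2x2 initiator matrix and the iteration count are illustrative assumptions that roughly mimic commonly used Kronecker initiators, not the values shipped with BigDataBench.

# Illustrative: 20 Kronecker iterations over an assumed 2x2 initiator matrix,
# producing a graph of 2^20 vertices (matrix in Matlab-style notation).
./krongen -o:../data-outfile/amazon_gen.txt -m:"0.9 0.5; 0.5 0.1" -i:20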
4.3 Table Data

We use the Parallel Data Generation Framework (PDGF) to generate table data. PDGF is a generic data generator for database benchmarking. It was designed to take advantage of today's multi-core processors and large clusters of computers to generate large amounts of synthetic benchmark data very quickly. PDGF uses a fully computational approach and is a pure Java implementation, which makes it very portable. You can use your own configuration files to generate table data.

Usage

1. Prepare the configuration files.

The configuration files are written in XML and are by default stored in the config folder. PDGF-V2 is configured with two XML files: the schema configuration and the generation configuration. The schema configuration (demo-schema.xml) defines the structure of the data and the generation rules, while the generation configuration (demo-generation.xml) defines the output and the post-processing of the generated data. For the demo, we use the files demo-schema.xml and demo-generation.xml, which are contained in the provided .gz file. Initially, we will generate two tables: OS_ORDER and OS_ORDER_ITEM.

2. Generate the data.

After creating both demo-schema.xml and demo-generation.xml, a first data generation run can be performed. To do so, open a shell, change into the PDGFEnvironment directory, and run:

java -XX:NewRatio=1 -jar pdgf.jar -l demo-schema.xml -l demo-generation.xml -c -s

5 Workloads

After generating the big data, we integrate a series of workloads to process it in our big data benchmark. This part introduces how to run the benchmark for each workload. There are mainly two steps: the first is to generate the big data, and the second is to run the applications using the data we generated. After unpacking the package, users will see six main folders: General Basic Operations, Database Basic Operations, Data Warehouse Basic Operations, Search Engine, Social Network, and Ecommerce System.

5.1 MicroBenchmarks

5.1.1 Sort & Wordcount & Grep

Note: please refer to BigDataBench V1 for these three workloads in the MapReduce environment. The MPI and Spark versions will be published soon.

5.1.2 BFS (Breadth-First Search)

To prepare:

1. Decompress the file BigDataBench_V2.1.tar.gz:

tar xzf BigDataBench_V2.1.tar.gz

2. Enter the graph500 directory:

cd BigDataBench_V2.1/MicroBenchmarks/BFS/graph500

Basic command-line usage:

mpirun -np PROCESS_NUM graph500/mpi/graph500_mpi_simple VERTEX_SIZE

Parameters:
PROCESS_NUM: the number of MPI processes;
VERTEX_SIZE: the vertex scale; the total number of vertices is 2^VERTEX_SIZE.

For example, to run with 4 processes and 2^20 vertices, the command is:

mpirun -np 4 graph500/mpi/graph500_mpi_simple 20
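To compare BFS performance across problem sizes, the vertex scale can be swept in a loop. The snippet below is a small convenience wrapper around the command above; the process count and the chosen scales are illustrative assumptions.

# Illustrative sweep: BFS at 2^16, 2^18, and 2^20 vertices with 4 MPI processes.
for scale in 16 18 20; do
    mpirun -np 4 graph500/mpi/graph500_mpi_simple $scale
done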
5.2 Basic Datastore Operations

We use YCSB to run the database basic operations, and we provide three ways to run each operation: HBase, Cassandra, and MongoDB.

To prepare, obtain YCSB:

wget
tar zxvf ycsb tar.gz
cd ycsb

Or clone the git repository and build:

git clone git://github.com/brianfrankcooper/ycsb.git
cd YCSB
mvn clean package

We use $YCSB to denote the path of YCSB in the following steps.

5.2.1 Write

1. For HBase:

cd $YCSB
sh bin/ycsb load hbase -P workloads/workloadc -p threads=<thread-numbers> -p columnfamily=<family> -p recordcount=<recordcount-value> -p hosts=<hostip> -s > load.dat

A few notes about this command:
<thread-number>: the number of client threads; this is often increased to raise the amount of load offered against the database.
<family>: in the HBase case, we use it to set the database column family. You should have the database usertable with a column family before running this command; all data will then be loaded into the database usertable with that column family.
<recordcount-value>: the total number of records for this benchmark. For example, when you want to load 10 GB of data, set it to the corresponding number of records.
<hostip>: the IP of the HBase master node.

2. For Cassandra:

Before you run the benchmark, you should create the keyspace and column family in Cassandra. You can use the following commands to create them:

CREATE KEYSPACE usertable
    with placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy'
    and strategy_options = {replication_factor:2};

use usertable;
create column family data
    with comparator=UTF8Type
    and default_validation_class=UTF8Type
    and key_validation_class=UTF8Type;

Then run the load:

cd $YCSB
sh bin/ycsb load cassandra-10 -P workloads/workloadc -p threads=<thread-numbers> -p recordcount=<recordcount-value> -p hosts=<hostips> -s > load.dat

A few notes about this command:
<thread-number>: the number of client threads; this is often increased to raise the amount of load offered against the database.
<recordcount-value>: the total number of records for this benchmark. For example, when you want to load 10 GB of data, set it to the corresponding number of records.
<hostips>: the IPs of the Cassandra nodes. If you have more than one node, separate them with ",".

3. For MongoDB:

cd $YCSB
sh bin/ycsb load mongodb -P workloads/workloadc -p threads=<thread-numbers> -p recordcount=<recordcount-value> -p mongodb.url=<mongodb.url> -p mongodb.database=<database> -p mongodb.writeConcern=normal -s > load.dat

A few notes about this command:
<thread-number>: the number of client threads; this is often increased to raise the amount of load offered against the database.
<recordcount-value>: the total number of records for this benchmark. For example, when you want to load 10 GB of data, set it to the corresponding number of records.
<mongodb.url>: this parameter should point to the mongos instance of MongoDB, for example mongodb:// :
<database>: in the MongoDB case, we use it to select the database. You should have the database ycsb with the collection usertable before running this command; all data will then be loaded into the database ycsb with the collection usertable. To create the database and the collection, you can use the following commands:

db.runCommand({enablesharding:"ycsb"});
db.runCommand({shardcollection:"ycsb.usertable", key:{_id:1}});
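With the placeholders filled in, a concrete write (load) run might look like the sketch below, following the HBase command form above; the thread count, record count, column family name, and master IP are all illustrative assumptions.

# Illustrative HBase load: 100 client threads, 10 million records,
# column family "f1", HBase master at 192.168.1.10.
cd $YCSB
sh bin/ycsb load hbase -P workloads/workloadc -p threads=100 -p columnfamily=f1 -p recordcount=10000000 -p hosts=192.168.1.10 -s > load.dat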
5.2.2 Read

1. For HBase:

cd $YCSB
sh bin/ycsb run hbase -P workloads/workloadc -p threads=<thread-numbers> -p columnfamily=<family> -p operationcount=<operationcount-value> -p hosts=<hostip> -s > tran.dat

A few notes about this command:
<thread-number>: the number of client threads; this is often increased to raise the amount of load offered against the database.
<family>: in the HBase case, we use it to set the database column family. You should have the database usertable with a column family before running this command.
<operationcount-value>: the total number of operations for this benchmark. For example, when you want to cover 10 GB of data, set it to the corresponding number of operations.
<hostip>: the IP of the HBase master node.

2. For Cassandra:

cd $YCSB
sh bin/ycsb run cassandra-10 -P workloads/workloadc -p threads=<thread-numbers> -p operationcount=<operationcount-value> -p hosts=<hostips> -s > tran.dat

A few notes about this command:
<thread-number>: the number of client threads; this is often increased to raise the amount of load offered against the database.
<operationcount-value>: the total number of operations for this benchmark.
<hostips>: the IPs of the Cassandra nodes. If you have more than one node, separate them with ",".

3. For MongoDB:

cd $YCSB
sh bin/ycsb run mongodb -P workloads/workloadc -p threads=<thread-numbers> -p operationcount=<operationcount-value> -p mongodb.url=<mongodb.url> -p mongodb.database=<database> -p mongodb.writeConcern=normal -p mongodb.maxconnections=<maxconnections> -s > tran.dat

A few notes about this command:
<thread-number>: the number of client threads; this is often increased to raise the amount of load offered against the database.
<operationcount-value>: the total number of operations for this benchmark.
<mongodb.url>: this parameter should point to the mongos instance of MongoDB.
<database>: in the MongoDB case, we use it to select the database. You should have the database ycsb with the collection usertable before running this command (see 5.2.1 for the creation commands).
<maxconnections>: the maximum number of connections to MongoDB.
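Again as an illustration, a read test against a two-node Cassandra cluster with assumed values might be:

# Illustrative Cassandra read test: 100 client threads, 10 million operations,
# two Cassandra nodes (IPs are assumptions).
cd $YCSB
sh bin/ycsb run cassandra-10 -P workloads/workloadc -p threads=100 -p operationcount=10000000 -p hosts=192.168.1.21,192.168.1.22 -s > tran.dat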
5.2.3 Scan

1. For HBase:

cd $YCSB
sh bin/ycsb run hbase -P workloads/workloade -p threads=<thread-numbers> -p columnfamily=<family> -p operationcount=<operationcount-value> -p hosts=<hostip> -s > tran.dat

A few notes about this command:
<thread-number>: the number of client threads; this is often increased to raise the amount of load offered against the database.
<family>: in the HBase case, we use it to set the database column family. You should have the database usertable with a column family before running this command.
<operationcount-value>: the total number of operations for this benchmark.
<hostip>: the IP of the HBase master node.

2. For Cassandra:

cd $YCSB
sh bin/ycsb run cassandra-10 -P workloads/workloade -p threads=<thread-numbers> -p operationcount=<operationcount-value> -p hosts=<hostips> -s > tran.dat

A few notes about this command:
<thread-number>: the number of client threads; this is often increased to raise the amount of load offered against the database.
<operationcount-value>: the total number of operations for this benchmark.
<hostips>: the IPs of the Cassandra nodes. If you have more than one node, separate them with ",".

3. For MongoDB:

cd $YCSB
sh bin/ycsb run mongodb -P workloads/workloade -p threads=<thread-numbers> -p operationcount=<operationcount-value> -p mongodb.url=<mongodb.url> -p mongodb.database=<database> -p mongodb.writeConcern=normal -p mongodb.maxconnections=<maxconnections> -s > tran.dat

A few notes about this command:
<thread-number>: the number of client threads; this is often increased to raise the amount of load offered against the database.
<operationcount-value>: the total number of operations for this benchmark.
<mongodb.url>: this parameter should point to the mongos instance of MongoDB.
<database>: in the MongoDB case, we use it to select the database. You should have the database ycsb with the collection usertable before running this command (see 5.2.1 for the creation commands).
<maxconnections>: the maximum number of connections to MongoDB.

5.3 Relational Query

To prepare:

Hive and Impala (including the Impala server and impala-shell) should be installed; please refer to Section 3, Prerequisite Software Packages.

1. Decompress the file BigDataBench_V2.1.tar.gz:

tar xzf BigDataBench_V2.1.tar.gz

2. Enter the RelationalQuery directory:

cd BigDataBench_V2.1/RelationalQuery

3. We use PDGF-V2 to generate the data; please refer to Section 4.3, Table Data. You can generate the data with the Table_datagen tool:

cd BigDataBench_V2.1/DataGenerationTools/Table_datagen
java -XX:NewRatio=1 -jar pdgf.jar -l demo-schema.xml -l demo-generation.xml -c -s
mv output/ ../../RelationalQuery/data-relationalquery

4. Before importing the data, please make sure the hive and impala-shell commands are available in your shell environment. You can then import the data into Hive:

sh prepare_relationalquery.sh [ORDER_TABLE_DATA_PATH] [ITEM_TABLE_DATA_PATH]

Parameters:
ORDER_TABLE_DATA_PATH: the path of the order table;
ITEM_TABLE_DATA_PATH: the path of the item table.
Output: some log info.

To run:

sh run_relationalquery.sh [hive/impala] [select/aggregation/join]

Output: query result.
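The three query classes map naturally onto the two generated tables. The statements below are hedged illustrations of what a select, an aggregation, and a join over OS_ORDER and OS_ORDER_ITEM could look like when issued directly through the Hive CLI; the column names (order_id, item_id, price) are assumptions for illustration, not the schema shipped with BigDataBench, and the actual queries are driven by run_relationalquery.sh.

# Illustrative select, aggregation, and join queries via the Hive CLI.
hive -e "SELECT order_id FROM os_order LIMIT 10;"
hive -e "SELECT order_id, COUNT(*) FROM os_order_item GROUP BY order_id;"
hive -e "SELECT o.order_id, i.item_id, i.price FROM os_order o JOIN os_order_item i ON o.order_id = i.order_id;"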
5.4 Search Engine

A search engine is a typical application in the big data scenario. Google, Yahoo, and Baidu accept huge numbers of requests and offer the related search results every day. A typical search engine crawls data from the web and indexes the crawled data to provide the search service.

5.4.1 Search Engine Web Serving
5.4.2 SVM

Note: please refer to BigDataBench V1 for more information about these two workloads.

5.4.3 PageRank

The PageRank program we use is obtained from HiBench.

To prepare and generate data:

tar xzf BigDataBench_V2.1.tar.gz
cd BigDataBench_V2.1/SearchEngine/PageRank
sh gendata_pagerank.sh

To run:

sh run_pagerank.sh [#_of_nodes] [#_of_reducers] [HDFS edge_file_path] [makesym or nosym]

An illustrative invocation with concrete values appears at the end of this section.

5.4.4 Index

The Index program we use is obtained from HiBench.

To prepare and run:

tar xzf BigDataBench_V2.1.tar.gz
cd BigDataBench_V2.1/SearchEngine/Index
sh prepare.sh
sh run_index.sh
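As an illustration of the run_pagerank.sh parameters above, the following invocation assumes a one-million-node graph, 16 reducers, an edge file already uploaded to HDFS, and no edge symmetrization; all four values are assumptions chosen for illustration.

# Illustrative PageRank run: 1M nodes, 16 reducers,
# edge file at /pagerank/edge_file in HDFS, no symmetric edges.
sh run_pagerank.sh 1000000 16 /pagerank/edge_file nosym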
5.5 Social Network

Social networking services have become a new typical big data application. A typical social networking service generally contains a convenient web serving module for users to view and publish information. Besides, it has an efficient graph computation module that generates graph data (for friend recommendation, etc.) for each user.

5.5.1 Web Serving

We use the Web Serving workload of CloudSuite to construct the Web Serving workload of the social network system. Please refer to the CloudSuite documentation for setup.

5.5.2 Kmeans

The Kmeans program we use is obtained from Mahout.

To prepare and run:

tar xzf BigDataBench_V2.1.tar.gz
cd BigDataBench_V2.1/SNS/Kmeans
sh gendata_kmeans.sh
sh run_kmeans.sh

5.5.3 Connected Components

The Connected Components program we use is obtained from PEGASUS.

To prepare and run:

tar xzf BigDataBench_V2.1.tar.gz
cd BigDataBench_V2.1/SNS/Connected_Components
sh gendata_connectedcomponents.sh
sh run_connectedcomponents.sh [#_of_nodes] [#_of_reducers] [HDFS edge_file_path]

[#_of_nodes]: the number of nodes in the graph
[#_of_reducers]: the number of reducers to use in Hadoop.
- The number of reducers to use depends on the configuration of the Hadoop cluster.
- The rule of thumb is to use (number_of_machines * 2) as the number of reducers.
[HDFS edge_file_path]: the HDFS directory where the edge file is located
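Following the rule of thumb above, an eight-machine cluster would use 16 reducers. The node count and HDFS path below are illustrative assumptions.

# Illustrative run: 1M-node graph, 8 machines x 2 = 16 reducers,
# edge file stored at /sns/edge_file in HDFS.
sh run_connectedcomponents.sh 1000000 16 /sns/edge_file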
5.6 Ecommerce System

A typical ecommerce system provides friendly web serving, a reliable database system, and a rapid data analysis module. The database system stores information on items, users, and biddings. Data analysis is an important module of an ecommerce system: it analyzes users' features and then promotes bidding amounts by giving proper recommendations. With the development of related technologies, e-commerce has already become one of the most important applications on the Internet. Our big data benchmarking work uses various state-of-the-art algorithms and techniques to access, analyze, and store big data, and offers the e-commerce workloads below.

5.6.1 Ecommerce System Web Serving

We use the open source tool RUBiS as our web service and data management workload application. Since e-commerce began to unfold more than ten years ago, it has attracted many researchers to study its workload characteristics. RUBiS is an auction site prototype modeled after eBay that is used to evaluate application design patterns and the performance scalability of application servers. The site implements the core functionality of an auction site: selling, browsing, and bidding. To set up and run a RUBiS system, please refer to the RUBiS website. You can change the parameters of the payload generator (RUBiS/Client/rubis.properties) as needed:

vim $RUBiS_HOME/Client/rubis.properties
make emulator

5.6.2 Collaborative Filtering Recommendation

Collaborative filtering recommendation is one of the most widely used algorithms in e-commerce analysis. It aims to solve the prediction problem where the task is to estimate the preference of a user towards an item which he/she has not yet seen. We use the RecommenderJob in Mahout as our recommendation workload, which is a completely distributed item-based recommender. It expects ID1, ID2, value (optional) as inputs, and outputs ID1s with associated recommended ID2s and their scores. As you can see, this data set is a kind of graph data. Before you run the RecommenderJob, you must have Hadoop and Mahout prepared. You can use the Kronecker generator (see Section 4.2) to generate graph data for the RecommenderJob.
tar xzf BigDataBench_V2.1.tar.gz
cd BigDataBench_V2.1/E-commerce
sh gendata_recommendator.sh
sh run_recommendator.sh

Input the parameters according to the prompts:
1. your working directory;
2. the directory of MAHOUT_HOME.

5.6.3 Bayes

Naive Bayes is an algorithm that can be used to classify objects into (usually binary) categories. It is one of the most common learning algorithms in classification. Despite its simplicity and rather naive assumptions, it has proven to work surprisingly well in practice. We use the naive Bayes implementation in Mahout as our Bayes workload, which is a completely distributed classifier.

tar xzf BigDataBench_V2.1.tar.gz
cd BigDataBench_V2.1/E-commerce
sh gendata_naivebayes.sh
sh run_naivebayes.sh

Input the parameters according to the prompts:
1. your working directory;
2. the directory of MAHOUT_HOME.
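Returning to the RecommenderJob input format described in 5.6.2: the expected input is a plain text file of ID1, ID2, value triples, one per line. The snippet below writes a tiny example by hand just to show the shape of the data; the user IDs, item IDs, and preference values are made up for illustration.

# Illustrative RecommenderJob input: userID,itemID,preference per line.
cat > sample_prefs.csv << 'EOF'
1,101,5.0
1,102,3.0
2,101,2.0
2,103,4.5
EOF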
6 Reference

[1]
[2]
[3]
[4] Amazon movie reviews.
[5] Big data benchmark by AMPLab of UC Berkeley.
[6] Facebook graph.
[7] Google web graph.
[8] Graph 500 home page.
[9] SNAP home page.
[10] Sort Benchmark home page.
[11] Standard Performance Evaluation Corporation (SPEC).
[12] Transaction Processing Performance Council (TPC).
[13] Wikipedia.
[14] A. Ghazal, "Big data benchmarking - data model proposal," in First Workshop on Big Data Benchmarking, San Jose, California.
[15] Timothy G. Armstrong, Vamsi Ponnekanti, Dhruba Borthakur, and Mark Callaghan, "LinkBench: a database benchmark based on the Facebook social graph."
[16] L. A. Barroso and U. Hölzle, "The datacenter as a computer: An introduction to the design of warehouse-scale machines," Synthesis Lectures on Computer Architecture, vol. 4, no. 1.
[17] M. Beyer, "Gartner says solving big data challenge involves more than just managing volumes of data."
[18] David M. Blei, Andrew Y. Ng, and Michael I. Jordan, "Latent Dirichlet allocation," Journal of Machine Learning Research, vol. 3.
[19] Deepayan Chakrabarti and Christos Faloutsos, "Graph mining: Laws, generators, and algorithms," ACM Computing Surveys (CSUR), vol. 38, no. 1.
[20] Deepayan Chakrabarti, Yiping Zhan, and Christos Faloutsos, "R-MAT: A recursive model for graph mining," Computer Science Department, p. 541.
[21] Brian F. Cooper, Adam Silberstein, Erwin Tam, Raghu Ramakrishnan, and Russell Sears, "Benchmarking cloud serving systems with YCSB," in Proceedings of the 1st ACM Symposium on Cloud Computing (SoCC '10). New York, NY, USA: ACM, 2010.
[22] Cliff Engle, Antonio Lupher, Reynold Xin, Matei Zaharia, Michael J. Franklin, Scott Shenker, and Ion Stoica, "Shark: fast data analysis using coarse-grained distributed memory," in Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data. ACM, 2012.
[23] Vuk Ercegovac, David J. DeWitt, and Raghu Ramakrishnan, "The TEXTURE benchmark: measuring performance of text queries on a relational DBMS," in Proceedings of the 31st International Conference on Very Large Data Bases. VLDB Endowment, 2005.
[24] M. Ferdman, A. Adileh, O. Kocberber, S. Volos, M. Alisafaee, D. Jevdjic, C. Kaynak, A. Popescu, A. Ailamaki, and B. Falsafi, "Clearing the clouds: A study of emerging workloads on modern hardware," in Architectural Support for Programming Languages and Operating Systems.
[25] Ahmad Ghazal, Minqing Hu, Tilmann Rabl, Francois Raab, Meikel Poess, Alain Crolotte, and Hans-Arno Jacobsen, "BigBench: Towards an industry standard benchmark for big data analytics," in SIGMOD. ACM.
[26] Ahmad Ghazal, Tilmann Rabl, Minqing Hu, Francois Raab, Meikel Poess, Alain Crolotte, and Hans-Arno Jacobsen, "BigBench: Towards an industry standard benchmark for big data analytics."
[27] S. Huang, J. Huang, J. Dai, T. Xie, and B. Huang, "The HiBench benchmark suite: Characterization of the MapReduce-based data analysis," in Data Engineering Workshops (ICDEW), 2010 IEEE 26th International Conference on. IEEE, 2010.
[28] Jure Leskovec, Deepayan Chakrabarti, Jon Kleinberg, Christos Faloutsos, and Zoubin Ghahramani, "Kronecker graphs: An approach to modeling networks," The Journal of Machine Learning Research, vol. 11.
[29] Jurij Leskovec, Deepayan Chakrabarti, Jon Kleinberg, and Christos Faloutsos, "Realistic, mathematically tractable graph generation and evolution, using Kronecker multiplication," in Knowledge Discovery in Databases: PKDD. Springer, 2005.
[30] Pejman Lotfi-Kamran, Boris Grot, Michael Ferdman, Stavros Volos, Onur Kocberber, Javier Picorel, Almutaz Adileh, Djordje Jevdjic, Sachin Idgunji, Emre Ozer et al., "Scale-out processors," in Proceedings of the 39th International Symposium on Computer Architecture. IEEE Press, 2012.
[31] Andrew Pavlo, Erik Paulson, Alexander Rasin, Daniel J. Abadi, David J. DeWitt, Samuel Madden, and Michael Stonebraker, "A comparison of approaches to large-scale data analysis," in Proceedings of the 2009 ACM SIGMOD International Conference on Management of Data (SIGMOD '09). New York, NY, USA: ACM, 2009.
[32] Tilmann Rabl, Michael Frank, Hatem Mousselly Sergieh, and Harald Kosch, "A data generator for cloud-scale benchmarking," in Performance Evaluation, Measurement and Characterization of Complex Systems. Springer, 2011.
[33] Margo Seltzer, David Krinsky, Keith Smith, and Xiaolan Zhang, "The case for application-specific benchmarking," in Hot Topics in Operating Systems, Proceedings of the Seventh Workshop on. IEEE, 1999.
[34] Y. C. Tay, "Data generation for application-specific benchmarking," VLDB, Challenges and Visions.
[35] Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael Franklin, Scott Shenker, and Ion Stoica, "Resilient distributed datasets: A fault-tolerant abstraction for in-memory cluster computing," in Proceedings of the 9th USENIX Conference on Networked Systems Design and Implementation. USENIX Association, 2012.
[36] J. Zhan, L. Zhang, N. Sun, L. Wang, Z. Jia, and C. Luo, "High volume throughput computing: Identifying and characterizing throughput oriented workloads in data centers," in Parallel and Distributed Processing Symposium Workshops & PhD Forum (IPDPSW), 2012 IEEE 26th International. IEEE, 2012.
[37] Zhen Jia, Lei Wang, Jianfeng Zhan, Lixin Zhang, and Chunjie Luo, "Characterizing data analysis workloads in data centers," in 2013 IEEE International Symposium on Workload Characterization (IISWC 2013).