The Spark Big Data Analytics Platform
1 The Spark Big Data Analytics Platform Amir H. Payberah Amirkabir University of Technology (Tehran Polytechnic) 1393/10/10 Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 1 / 171
2 Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 2 / 171
3 Big Data refers to datasets and flows large enough that they have outpaced our capability to store, process, analyze, and understand them. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 3 / 171
4 Where Does Big Data Come From? Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 4 / 171
5 Big Data Market Driving Factors The number of web pages indexed by Google, which was around one million in 1998, exceeded one trillion in 2008, and its growth has been accelerated by the rise of social networks. Mining big data: current status, and forecast to the future [Wei Fan et al., 2013] Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 5 / 171
6 Big Data Market Driving Factors The amount of mobile data traffic is expected to grow to 10.8 Exabyte per month by Worldwide Big Data Technology and Services Forecast [Dan Vesset et al., 2013] Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 6 / 171
7 Big Data Market Driving Factors More than 65 billion devices were connected to the Internet by 2010, and this number will go up to 230 billion by The Internet of Things Is Coming [John Mahoney et al., 2013] Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 7 / 171
8 Big Data Market Driving Factors Many companies are moving towards using Cloud services to access Big Data analytical tools. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 8 / 171
9 Big Data Market Driving Factors Open source communities Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 9 / 171
10 How To Store and Process Big Data? Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 10 / 171
11 Scale Up vs. Scale Out (1/2) Scale up or scale vertically: adding resources to a single node in a system. Scale out or scale horizontally: adding more nodes to a system. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 11 / 171
12 Scale Up vs. Scale Out (2/2) Scale up: more expensive than scaling out. Scale out: more challenging for fault tolerance and software development. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 12 / 171
13 Taxonomy of Parallel Architectures DeWitt, D. and Gray, J. Parallel database systems: the future of high performance database systems. ACM Communications, 35(6), 85-98, Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 13 / 171
15 Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 14 / 171
16 Big Data Analytics Stack Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 15 / 171
17 Hadoop Big Data Analytics Stack Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 16 / 171
18 Spark Big Data Analytics Stack Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 17 / 171
22 Big Data - File systems Traditional file-systems are not well-designed for large-scale data processing systems. Efficiency has a higher priority than other features, e.g., directory service. The massive size of the data means it is stored across multiple machines in a distributed way. HDFS/GFS, Amazon S3,... Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 18 / 171
26 Big Data - Database Relational Database Management Systems (RDBMS) were not designed to be distributed. NoSQL databases relax one or more of the ACID properties: BASE Different data models: key/value, column-family, graph, document. HBase/BigTable, Dynamo, Scalaris, Cassandra, MongoDB, Voldemort, Riak, Neo4J,... Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 19 / 171
30 Big Data - Resource Management Different frameworks require different computing resources. Large organizations need the ability to share data and resources between multiple frameworks. Resource management: sharing resources in a cluster between multiple frameworks while providing resource isolation. Mesos, YARN, Quincy,... Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 20 / 171
33 Big Data - Execution Engine Scalable and fault-tolerant parallel data processing on clusters of unreliable machines. Data-parallel programming model for clusters of commodity machines. MapReduce, Spark, Stratosphere, Dryad, Hyracks,... Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 21 / 171
37 Big Data - Query/Scripting Language Low-level programming of execution engines, e.g., MapReduce, is not easy for end users. We need a high-level language to improve the query capabilities of execution engines. It translates user-defined functions to the low-level API of the execution engines. Pig, Hive, Shark, Meteor, DryadLINQ, SCOPE,... Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 22 / 171
40 Big Data - Stream Processing Providing users with fresh and low latency results. Database Management Systems (DBMS) vs. Data Stream Management Systems (DSMS) Storm, S4, SEEP, D-Stream, Naiad,... Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 23 / 171
44 Big Data - Graph Processing Many problems are expressed using graphs: sparse computational dependencies, and multiple iterations to converge. Data-parallel frameworks, such as MapReduce, are not ideal for these problems: they are slow. Graph processing frameworks are optimized for graph-based problems. Pregel, Giraph, GraphX, GraphLab, PowerGraph, GraphChi,... Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 24 / 171
47 Big Data - Machine Learning Implementing and consuming machine learning techniques at scale are difficult tasks for developers and end users. There exist platforms that address this by providing scalable machine-learning and data mining libraries. Mahout, MLBase, SystemML, Ricardo, Presto,... Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 25 / 171
50 Big Data - Configuration and Synchronization Service A means to synchronize distributed applications' accesses to shared resources. Allows distributed processes to coordinate with each other. Zookeeper, Chubby,... Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 26 / 171
51 Outline Introduction to HDFS Data processing with MapReduce Introduction to Scala Data exploration using Spark Stream processing with Spark Streaming Graph analytics with GraphX Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 27 / 171
52 Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 28 / 171
53 What is a Filesystem? Controls how data is stored on and retrieved from disk. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 29 / 171
55 Distributed Filesystems When data outgrows the storage capacity of a single machine: partition it across a number of separate machines. Distributed filesystems: manage the storage across a network of machines. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 30 / 171
56 HDFS Hadoop Distributed FileSystem Appears as a single disk Runs on top of a native filesystem, e.g., ext3 Fault tolerant: can handle disk crashes, machine crashes,... Based on Google's filesystem GFS Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 31 / 171
57 HDFS is Good for... Storing large files: terabytes, petabytes, etc.; MB or more per file. Streaming data access: data is written once and read many times; optimized for batch reads rather than random reads. Cheap commodity hardware: no need for super-computers, use less reliable commodity hardware. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 32 / 171
58 HDFS is Not Good for... Low-latency reads: high-throughput rather than low latency for small chunks of data; HBase addresses this issue. Large amounts of small files: better for millions of large files instead of billions of small files. Multiple writers: single writer per file; writes only at the end of a file, no support for arbitrary offsets. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 33 / 171
62 HDFS Daemons (1/2) An HDFS cluster is managed by three types of processes. Namenode Manages the filesystem, e.g., namespace, meta-data, and file blocks Metadata is stored in memory. Datanode Stores and retrieves data blocks Reports to Namenode Runs on many machines Secondary Namenode Only for checkpointing. Not a backup for Namenode Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 34 / 171
63 HDFS Daemons (2/2) Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 35 / 171
64 Files and Blocks (1/2) Files are split into blocks. Blocks Single unit of storage: a contiguous piece of information on a disk. Transparent to user. Managed by Namenode, stored by Datanode. Blocks are traditionally either 64MB or 128MB: default is 64MB. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 36 / 171
65 Files and Blocks (2/2) Same block is replicated on multiple machines: default is 3 Replica placements are rack aware. 1st replica on the local rack. 2nd replica on the local rack but on a different machine. 3rd replica on a different rack. Namenode determines replica placement. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 37 / 171
66 HDFS Client Client interacts with Namenode To update the Namenode namespace. To retrieve block locations for writing and reading. Client interacts directly with Datanode To read and write data. Namenode does not directly write or read data. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 38 / 171
67 HDFS Write 1. Create a new file in the Namenode's namespace; calculate block topology. 2, 3, 4. Stream data to the first, second and third node. 5, 6, 7. Success/failure acknowledgment. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 39 / 171
68 HDFS Read 1. Retrieve block locations. 2, 3. Read blocks to re-assemble the file. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 40 / 171
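The two slides above describe the write and read paths but show no client code. Below is a minimal sketch, not from the slides, of how a client program drives those paths through Hadoop's FileSystem API; the file path and the string written are illustrative only, and the configuration is assumed to point at the pseudo-distributed setup described later (fs.defaultFS in core-site.xml).

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object HdfsClientSketch {
  def main(args: Array[String]): Unit = {
    // Reads fs.defaultFS (e.g., hdfs://localhost:8020) from the Hadoop configuration.
    val fs = FileSystem.get(new Configuration())
    val file = new Path("/dir/file.txt")      // hypothetical path

    // Write: the Namenode allocates blocks; the bytes stream to the Datanodes.
    val out = fs.create(file)
    out.writeBytes("Hello World Bye World\n")
    out.close()

    // Read: block locations come from the Namenode; the bytes come from the Datanodes.
    val in = fs.open(file)
    val buffer = new Array[Byte](1024)
    val n = in.read(buffer)
    println(new String(buffer, 0, n))
    in.close()
  }
}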
69 Namenode Memory Concerns For fast access Namenode keeps all block metadata in-memory. Will work well for clusters of 100 machines. Changing block size will affect how much space a cluster can host. 64MB to 128MB will reduce the number of blocks and increase the space that Namenode can support. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 41 / 171
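A rough worked example of the block-size trade-off mentioned above (a sketch, not from the slides): the ~150 bytes of Namenode heap per block is a commonly quoted rule of thumb, and the 100 TB data volume is an arbitrary assumption.

// Hypothetical: 100 TB of data, ~150 bytes of Namenode heap per block object.
val dataBytes = 100L * 1024 * 1024 * 1024 * 1024
val bytesPerBlockEntry = 150L

def numBlocks(blockSizeMB: Long): Long = dataBytes / (blockSizeMB * 1024 * 1024)

println(numBlocks(64))                        // ~1.6 million blocks
println(numBlocks(64) * bytesPerBlockEntry)   // ~245 MB of Namenode heap
println(numBlocks(128) * bytesPerBlockEntry)  // ~123 MB: doubling the block size halves it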
70 HDFS Federation Hadoop 2+ Each Namenode will host part of the blocks. A Block Pool is a set of blocks that belong to a single namespace. Support for machine clusters. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 42 / 171
72 Namenode Fault-Tolerance (1/2) Namenode is a single point of failure. If Namenode crashes then the cluster is down. The Secondary Namenode periodically merges the namespace image and the edit log, and writes a persistent record of the result to disk (checkpointing). But, the state of the secondary Namenode lags that of the primary: it does not provide high-availability of the filesystem Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 43 / 171
73 Namenode Fault-Tolerance (2/2) High availability Namenode. Hadoop 2+ Active standby is always running and takes over in case main Namenode fails. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 44 / 171
74 HDFS Installation and Shell Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 45 / 171
75 HDFS Installation Three options Local (Standalone) Mode Pseudo-Distributed Mode Fully-Distributed Mode Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 46 / 171
76 Installation - Local Default configuration after the download. Executes as a single Java process. Works directly with local filesystem. Useful for debugging. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 47 / 171
77 Installation - Pseudo-Distributed (1/6) Still runs on a single node. Each daemon runs in its own Java process. Namenode Secondary Namenode Datanode Configuration files: hadoop-env.sh core-site.xml hdfs-site.xml Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 48 / 171
78 Installation - Pseudo-Distributed (2/6) Specify environment variables in hadoop-env.sh export JAVA_HOME=/opt/jdk1.7.0_51 Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 49 / 171
79 Installation - Pseudo-Distributed (3/6) Specify location of Namenode in core-site.xml <property> <name>fs.defaultFS</name> <value>hdfs://localhost:8020</value> <description>NameNode URI</description> </property> Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 50 / 171
80 Installation - Pseudo-Distributed (4/6) Configuration of Namenode in hdfs-site.xml Path on the local filesystem where the Namenode stores the namespace and transaction logs persistently. <property> <name>dfs.namenode.name.dir</name> <value>/opt/hadoop-2.2.0/hdfs/namenode</value> <description>description...</description> </property> Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 51 / 171
81 Installation - Pseudo-Distributed (5/6) Configuration of Secondary Namenode in hdfs-site.xml Path on the local filesystem where the Secondary Namenode stores the temporary images to merge. <property> <name>dfs.namenode.checkpoint.dir</name> <value>/opt/hadoop-2.2.0/hdfs/secondary</value> <description>description...</description> </property> Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 52 / 171
82 Installation - Pseudo-Distributed (6/6) Configuration of Datanode in hdfs-site.xml Comma-separated list of paths on the local filesystem of a Datanode where it should store its blocks. <property> <name>dfs.datanode.data.dir</name> <value>/opt/hadoop-2.2.0/hdfs/datanode</value> <description>description...</description> </property> Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 53 / 171
85 Start HDFS and Test Format the Namenode directory (do this only once, the first time). hdfs namenode -format Start the Namenode, Secondary namenode and Datanode daemons. hadoop-daemon.sh start namenode hadoop-daemon.sh start secondarynamenode hadoop-daemon.sh start datanode jps Verify the daemons are running: Namenode: Secondary Namenode: Datanode: Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 54 / 171
87 HDFS Shell hdfs dfs -<command> -<option> <path> hdfs dfs -ls / hdfs dfs -ls file:///home/big hdfs dfs -ls hdfs://localhost/ hdfs dfs -cat /dir/file.txt hdfs dfs -cp /dir/file1 /otherdir/file2 hdfs dfs -mv /dir/file1 /dir2/file2 hdfs dfs -mkdir /newdir hdfs dfs -put file.txt /dir/file.txt # can also use copyFromLocal hdfs dfs -get /dir/file.txt file.txt # can also use copyToLocal hdfs dfs -rm /dir/filetodelete hdfs dfs -help Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 55 / 171
88 Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 56 / 171
89 MapReduce A shared nothing architecture for processing large data sets with a parallel/distributed algorithm on clusters. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 57 / 171
91 MapReduce Definition A programming model: to batch process large data sets (inspired by functional programming). An execution framework: to run parallel algorithms on clusters of commodity hardware. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 58 / 171
92 Simplicity Don't worry about parallelization, fault tolerance, data distribution, and load balancing (MapReduce takes care of these). Hide system-level details from programmers. Simplicity! Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 59 / 171
93 Programming Model Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 60 / 171
94 MapReduce Dataflow map function: processes data and generates a set of intermediate key/value pairs. reduce function: merges all intermediate values associated with the same intermediate key. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 61 / 171
95 Example: Word Count Consider doing a word count of the following file using MapReduce: Hello World Bye World Hello Hadoop Goodbye Hadoop Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 62 / 171
96 Example: Word Count - map The map function reads in words one at a time and outputs (word, 1) for each parsed input word. The map function output is: (Hello, 1) (World, 1) (Bye, 1) (World, 1) (Hello, 1) (Hadoop, 1) (Goodbye, 1) (Hadoop, 1) Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 63 / 171
97 Example: Word Count - shuffle The shuffle phase between map and reduce phase creates a list of values associated with each key. The reduce function input is: (Bye, (1)) (Goodbye, (1)) (Hadoop, (1, 1)) (Hello, (1, 1)) (World, (1, 1)) Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 64 / 171
98 Example: Word Count - reduce The reduce function sums the numbers in the list for each key and outputs (word, count) pairs. The output of the reduce function is the output of the MapReduce job: (Bye, 1) (Goodbye, 1) (Hadoop, 2) (Hello, 2) (World, 2) Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 65 / 171
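The same dataflow can be sketched with plain Scala collections (an illustration of the model only, not the Hadoop API shown later): map emits (word, 1) pairs, the shuffle is modelled by groupBy on the key, and reduce sums each group.

val input = List("Hello World Bye World", "Hello Hadoop Goodbye Hadoop")

val mapped   = input.flatMap(line => line.split(" ").map(word => (word, 1)))
val shuffled = mapped.groupBy { case (word, _) => word }                 // shuffle phase
val reduced  = shuffled.map { case (word, pairs) => (word, pairs.map(_._2).sum) }

println(reduced.toList.sortBy(_._1))
// List((Bye,1), (Goodbye,1), (Hadoop,2), (Hello,2), (World,2))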
99 Combiner Function (1/2) In some cases, there is significant repetition in the intermediate keys produced by each map task, and the reduce function is commutative and associative. Machine 1: (Hello, 1) (World, 1) (Bye, 1) (World, 1) Machine 2: (Hello, 1) (Hadoop, 1) (Goodbye, 1) (Hadoop, 1) Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 66 / 171
100 Combiner Function (2/2) Users can specify an optional combiner function to partially merge data before it is sent over the network to the reduce function. Typically the same code is used to implement both the combiner and the reduce function. Machine 1: (Hello, 1) (World, 2) (Bye, 1) Machine 2: (Hello, 1) (Hadoop, 2) (Goodbye, 1) Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 67 / 171
101 Example: Word Count - map public static class MyMap extends Mapper<...> { private final static IntWritable one = new IntWritable(1); private Text word = new Text(); public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException { String line = value.toString(); StringTokenizer tokenizer = new StringTokenizer(line); while (tokenizer.hasMoreTokens()) { word.set(tokenizer.nextToken()); context.write(word, one); } } } Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 68 / 171
102 Example: Word Count - reduce public static class MyReduce extends Reducer<...> { public void reduce(Text key, Iterator<...> values, Context context) throws IOException, InterruptedException { int sum = 0; while (values.hasNext()) sum += values.next().get(); context.write(key, new IntWritable(sum)); } } Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 69 / 171
103 Example: Word Count - driver public static void main(String[] args) throws Exception { Configuration conf = new Configuration(); Job job = new Job(conf, "wordcount"); job.setOutputKeyClass(Text.class); job.setOutputValueClass(IntWritable.class); job.setMapperClass(MyMap.class); job.setCombinerClass(MyReduce.class); job.setReducerClass(MyReduce.class); job.setInputFormatClass(TextInputFormat.class); job.setOutputFormatClass(TextOutputFormat.class); FileInputFormat.addInputPath(job, new Path(args[0])); FileOutputFormat.setOutputPath(job, new Path(args[1])); job.waitForCompletion(true); } Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 70 / 171
104 Example: Word Count - Compile and Run (1/2) # start hdfs > hadoop-daemon.sh start namenode > hadoop-daemon.sh start datanode # make the input folder in hdfs > hdfs dfs -mkdir -p input # copy input files from local filesystem into hdfs > hdfs dfs -put file0 input/file0 > hdfs dfs -put file1 input/file1 > hdfs dfs -ls input/ input/file0 input/file1 > hdfs dfs -cat input/file0 Hello World Bye World > hdfs dfs -cat input/file1 Hello Hadoop Goodbye Hadoop Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 71 / 171
105 Example: Word Count - Compile and Run (2/2) > mkdir wordcount_classes > javac -classpath $HADOOP_HOME/share/hadoop/common/hadoop-common jar: $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-core jar: $HADOOP_HOME/share/hadoop/common/lib/commons-cli-1.2.jar -d wordcount_classes sics/WordCount.java > jar -cvf wordcount.jar -C wordcount_classes/. > hadoop jar wordcount.jar sics.WordCount input output > hdfs dfs -ls output output/part > hdfs dfs -cat output/part Bye 1 Goodbye 1 Hadoop 2 Hello 2 World 2 Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 72 / 171
106 Execution Engine Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 73 / 171
107 MapReduce Execution (1/7) The user program divides the input files into M splits. A typical size of a split is the size of a HDFS block (64 MB). Converts them to key/value pairs. It starts up many copies of the program on a cluster of machines. J. Dean and S. Ghemawat, MapReduce: simplified data processing on large clusters, ACM Communications 51(1), Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 74 / 171
108 MapReduce Execution (2/7) One of the copies of the program is the master, and the rest are workers. The master assigns work to the workers. It picks idle workers and assigns each one a map task or a reduce task. J. Dean and S. Ghemawat, MapReduce: simplified data processing on large clusters, ACM Communications 51(1), Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 75 / 171
109 MapReduce Execution (3/7) A map worker reads the contents of the corresponding input splits. It parses key/value pairs out of the input data and passes each pair to the user defined map function. The intermediate key/value pairs produced by the map function are buffered in memory. J. Dean and S. Ghemawat, MapReduce: simplified data processing on large clusters, ACM Communications 51(1), Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 76 / 171
110 MapReduce Execution (4/7) The buffered pairs are periodically written to local disk. They are partitioned into R regions (hash(key) mod R). The locations of the buffered pairs on the local disk are passed back to the master. The master forwards these locations to the reduce workers. J. Dean and S. Ghemawat, MapReduce: simplified data processing on large clusters, ACM Communications 51(1), Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 77 / 171
111 MapReduce Execution (5/7) A reduce worker reads the buffered data from the local disks of the map workers. When a reduce worker has read all intermediate data, it sorts it by the intermediate keys. J. Dean and S. Ghemawat, MapReduce: simplified data processing on large clusters, ACM Communications 51(1), Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 78 / 171
112 MapReduce Execution (6/7) The reduce worker iterates over the intermediate data. For each unique intermediate key, it passes the key and the corresponding set of intermediate values to the user defined reduce function. The output of the reduce function is appended to a final output file for this reduce partition. J. Dean and S. Ghemawat, MapReduce: simplified data processing on large clusters, ACM Communications 51(1), Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 79 / 171
113 MapReduce Execution (7/7) When all map tasks and reduce tasks have been completed, the master wakes up the user program. J. Dean and S. Ghemawat, MapReduce: simplified data processing on large clusters, ACM Communications 51(1), Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 80 / 171
114 Hadoop MapReduce and HDFS Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 81 / 171
115 Fault Tolerance On worker failure: Detect failure via periodic heartbeats. Re-execute in-progress map and reduce tasks. Re-execute completed map tasks: their output is stored on the local disk of the failed machine and is therefore inaccessible. Completed reduce tasks do not need to be re-executed since their output is stored in a global filesystem. On master failure: State is periodically checkpointed: a new copy of master starts from the last checkpoint state. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 82 / 171
116 Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 83 / 171
117 Scala Scala: scalable language A blend of object-oriented and functional programming Runs on the Java Virtual Machine Designed by Martin Odersky at EPFL Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 84 / 171
119 Functional Programming Languages In a restricted sense: a language that does not have mutable variables, assignments, or imperative control structures. In a wider sense: it enables the construction of programs that focus on functions. Functions are first-class citizens: Defined anywhere (including inside other functions). Passed as parameters to functions and returned as results. Operators to compose functions. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 85 / 171
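A small sketch of what first-class functions look like in Scala (the function names are illustrative): function values can be stored, passed as arguments, returned from other functions, and composed.

val double: Int => Int = x => x * 2
val inc:    Int => Int = x => x + 1

def twice(f: Int => Int): Int => Int = f andThen f   // returns a new function
val doubleThenInc = double andThen inc               // function composition

println(twice(double)(3))      // 12
println(doubleThenInc(3))      // 7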
120 Scala Variables Values: immutable Variables: mutable var myvar: Int = 0 val myval: Int = 1 Scala data types: Boolean, Byte, Short, Char, Int, Long, Float, Double, String Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 86 / 171
121 If... Else var x = 30; if (x == 10) { println("value of X is 10"); } else if (x == 20) { println("value of X is 20"); } else { println("this is else statement"); } Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 87 / 171
122 Loop var a = 0 var b = 0 for (a <- 1 to 3; b <- 1 until 3) { println("value of a: " + a + ", b: " + b ) } // loop with collections val numlist = List(1, 2, 3, 4, 5, 6) for (a <- numlist) { println("value of a: " + a) } Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 88 / 171
123 Functions def functionname([list of parameters]): [return type] = { function body return [expr] } def addint(a: Int, b: Int): Int = { var sum: Int = 0 sum = a + b sum } println("returned Value: " + addint(5, 7)) Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 89 / 171
124 Anonymous Functions Lightweight syntax for defining functions. var mul = (x: Int, y: Int) => x * y println(mul(3, 4)) Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 90 / 171
125 Higher-Order Functions def apply(f: Int => String, v: Int) = f(v) def layout(x: Int) = "[" + x.tostring() + "]" println(apply(layout, 10)) Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 91 / 171
128 Collections (1/2) Array: fixed-size sequential collection of elements of the same type val t = Array("zero", "one", "two") val b = t(0) // b = zero List: sequential collection of elements of the same type val t = List("zero", "one", "two") val b = t(0) // b = zero Set: collection of elements of the same type, without duplicates val t = Set("zero", "one", "two") t.contains("zero") // true Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 92 / 171
130 Collections (2/2) Map: collection of key/value pairs val m = Map(1 -> "sics", 2 -> "kth") val b = m(1) // b = sics Tuple: A fixed number of items of different types together val t = (1, "hello") val b = t._1 // b = 1 val c = t._2 // c = hello Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 93 / 171
131 Functional Combinators map: applies a function over each element in the list val numbers = List(1, 2, 3, 4) numbers.map(i => i * 2) // List(2, 4, 6, 8) flatten: it collapses one level of nested structure List(List(1, 2), List(3, 4)).flatten // List(1, 2, 3, 4) flatMap: map + flatten foreach: it is like map but returns nothing Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 94 / 171
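A short illustration of the last two combinators (flatMap and foreach), in the same style as the map example above; the sample data is illustrative.

val lines = List("hello world", "bye world")

// flatMap = map + flatten: one input element may yield several output elements.
lines.map(_.split(" ").toList)       // List(List(hello, world), List(bye, world))
lines.flatMap(_.split(" ").toList)   // List(hello, world, bye, world)

// foreach applies a side-effecting function and returns nothing (Unit).
lines.foreach(println)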
133 Classes and Objects class Calculator { val brand: String = "HP" def add(m: Int, n: Int): Int = m + n } val calc = new Calculator calc.add(1, 2) println(calc.brand) A singleton is a class that can have only one instance. object Test { def main(args: Array[String]) {... } } Test.main(null) Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 95 / 171
134 Case Classes and Pattern Matching Case classes are used to store and match on the contents of a class. They are designed to be used with pattern matching. You can construct them without using new. case class Calc(brand: String, model: String) def calcType(calc: Calc) = calc match { case Calc("hp", "20B") => "financial" case Calc("hp", "48G") => "scientific" case Calc("hp", "30B") => "business" case _ => "Calculator of unknown type" } calcType(Calc("hp", "20B")) Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 96 / 171
135 Simple Build Tool (SBT) An open source build tool for Scala and Java projects. Similar to Java's Maven or Ant. It is written in Scala. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 97 / 171
136 SBT - Hello World! // make dir hello and edit Hello.scala object Hello { def main(args: Array[String]) { println("hello world.") } } $ cd hello $ sbt compile run Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 98 / 171
137 Common Commands compile: compiles the main sources. run <argument>*: run the main class. package: creates a jar file. console: starts the Scala interpreter. clean: deletes all generated files. help <command>: displays detailed help for the specified command. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/10 99 / 171
138 Create a Simple Project Create project directory. Create src/main/scala directory. Create build.sbt in the project root. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
139 build.sbt A list of Scala expressions, separated by blank lines. Located in the project's base directory. $ cat build.sbt name := "hello" version := "1.0" scalaVersion := "2.10.4" Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
140 Add Dependencies Add in build.sbt. Module ID format: "groupid" %% "artifact" % "version" % "configuration" libraryDependencies += "org.apache.spark" %% "spark-core" % "1.0.0" // multiple dependencies libraryDependencies ++= Seq( "org.apache.spark" %% "spark-core" % "1.0.0", "org.apache.spark" %% "spark-streaming" % "1.0.0" ) sbt uses the standard Maven2 repository by default, but you can add more resolvers. resolvers += "Akka Repository" at " Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
144 Scala Hands-on Exercises (1/3) Declare a list of integers as a variable called mynumbers val mynumbers = List(1, 2, 5, 4, 7, 3) Declare a function, pow, that computes the second power of an Int def pow(a: Int): Int = a * a Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
150 Scala Hands-on Exercises (2/3) Apply the function to mynumbers using the map function mynumbers.map(x => pow(x)) // or mynumbers.map(pow(_)) // or mynumbers.map(pow) Write the pow function inline in a map call, using closure notation mynumbers.map(x => x * x) Iterate through mynumbers and print out its items for (i <- mynumbers) println(i) // or mynumbers.foreach(println) Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
154 Scala Hands-on Exercises (3/3) Declare a list of pairs of strings and integers as a variable called mylist val mylist = List[(String, Int)](("a", 1), ("b", 2), ("c", 3)) Write an inline function to increment the integer values of the list mylist val x = mylist.map { case (name, age) => age + 1 } // or val x = mylist.map(i => i._2 + 1) // or val x = mylist.map(_._2 + 1) Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
155 Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
156 What is Spark? An efficient distributed general-purpose data analysis platform. Focusing on ease of programming and high performance. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
157 Motivation MapReduce programming model has not been designed for complex operations, e.g., data mining. Very expensive, i.e., always goes to disk and HDFS. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
158 Solution Extends MapReduce with more operators. Support for advanced data flow graphs. In-memory and out-of-core processing. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
159 Spark vs. Hadoop Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
164 Resilient Distributed Datasets (RDD) (1/2) A distributed memory abstraction. Immutable collections of objects spread across a cluster. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
165 Resilient Distributed Datasets (RDD) (2/2) An RDD is divided into a number of partitions, which are atomic pieces of information. Partitions of an RDD can be stored on different nodes of a cluster. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
166 RDD Operators Higher-order functions: transformations and actions. Transformations: lazy operators that create new RDDs. Actions: launch a computation and return a value to the program or write data to the external storage. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
167 Transformations vs. Actions Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
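A small sketch of the lazy/eager distinction (the data is illustrative): declaring transformations builds a lineage but runs nothing; only an action triggers the computation.

val nums    = sc.parallelize(1 to 1000000)
val squares = nums.map(x => x * x)         // transformation: nothing runs yet
val evens   = squares.filter(_ % 2 == 0)   // still nothing runs

// Actions trigger the whole chain to execute.
println(evens.count())                     // 500000
println(evens.take(3).toList)              // List(4, 16, 36)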
169 RDD Transformations - Map All pairs are independently processed. // passing each element through a function. val nums = sc.parallelize(Array(1, 2, 3)) val squares = nums.map(x => x * x) // {1, 4, 9} Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
171 RDD Transformations - GroupBy Pairs with identical key are grouped. Groups are independently processed. val schools = sc.parallelize(Seq(("sics", 1), ("kth", 1), ("sics", 2))) schools.groupByKey() // {("sics", (1, 2)), ("kth", (1))} schools.reduceByKey((x, y) => x + y) // {("sics", 3), ("kth", 1)} Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
173 RDD Transformations - Join Performs an equi-join on the key. Join candidates are independently processed. val list1 = sc.parallelize(Seq(("sics", "10"), ("kth", "50"), ("sics", "20"))) val list2 = sc.parallelize(Seq(("sics", "upsala"), ("kth", "stockholm"))) list1.join(list2) // ("sics", ("10", "upsala")) // ("sics", ("20", "upsala")) // ("kth", ("50", "stockholm")) Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
177 Basic RDD Actions Return all the elements of the RDD as an array. val nums = sc.parallelize(Array(1, 2, 3)) nums.collect() // Array(1, 2, 3) Return an array with the first n elements of the RDD. nums.take(2) // Array(1, 2) Return the number of elements in the RDD. nums.count() // 3 Aggregate the elements of the RDD using the given function. nums.reduce((x, y) => x + y) // 6 Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
178 Creating RDDs Turn a collection into an RDD. val a = sc.parallelize(Array(1, 2, 3)) Load text file from local FS, HDFS, or S3. val a = sc.textFile("file.txt") val b = sc.textFile("directory/*.txt") val c = sc.textFile("hdfs://namenode:9000/path/file") Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
179 SparkContext Main entry point to Spark functionality. Available in shell as variable sc. In standalone programs, you should make your own. import org.apache.spark.SparkContext import org.apache.spark.SparkContext._ val sc = new SparkContext(master, appName, [sparkHome], [jars]) Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
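Putting the pieces together, a minimal standalone word-count application sketch (the input path, output path, and application name are assumptions); it matches the SBT setup shown earlier with spark-core as a dependency.

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

object WordCountApp {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local[2]", "WordCountApp")

    val counts = sc.textFile("hamlet")          // hypothetical input file
      .flatMap(_.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.saveAsTextFile("hamlet-counts")      // hypothetical output directory
    sc.stop()
  }
}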
185 Spark Hands-on Exercises (1/3) Read data from a text file and create an RDD named pagecounts. val pagecounts = sc.textFile("hamlet") Get the first 10 lines of the text file. pagecounts.take(10).foreach(println) Count the total records in the data set pagecounts. pagecounts.count Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
191 Spark Hands-on Exercises (2/3) Filter the data set pagecounts and return the items that have the word this, and cache in the memory. val lineswiththis = pagecounts.filter(line => line.contains("this")).cache // or val lineswiththis = pagecounts.filter(_.contains("this")).cache Find the lines with the most number of words. lineswiththis.map(line => line.split(" ").size).reduce((a, b) => if (a > b) a else b) Count the total number of words. val wordcounts = lineswiththis.flatMap(line => line.split(" ")).count // or val wordcounts = lineswiththis.flatMap(_.split(" ")).count Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
195 Spark Hands-on Exercises (3/3) Count the number of distinct words. val uniquewordcounts = lineswiththis.flatMap(_.split(" ")).distinct.count Count the number of each word. val eachwordcounts = lineswiththis.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey((a, b) => a + b) Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
196 Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
198 Motivation Many applications must process large streams of live data and provide results in real-time. Processing information as it flows, without storing it persistently. Traditional DBMSs: Store and index data before processing it. Process data only when explicitly asked by the users. Both aspects contrast with our requirements. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
199 DBMS vs. DSMS (1/3) DBMS: persistent data where updates are relatively infrequent. DSMS: transient data that is continuously updated. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
200 DBMS vs. DSMS (2/3) DBMS: runs queries just once to return a complete answer. DSMS: executes standing queries, which run continuously and provide updated answers as new data arrives. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
201 DBMS vs. DSMS (3/3) Despite these differences, DSMSs resemble DBMSs: both process incoming data through a sequence of transformations based on SQL operators, e.g., selections, aggregates, joins. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
203 Spark Streaming Run a streaming computation as a series of very small, deterministic batch jobs. Chop up the live stream into batches of X seconds. Spark treats each batch of data as RDDs and processes them using RDD operations. Finally, the processed results of the RDD operations are returned in batches. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
205 DStream DStream: sequence of RDDs representing a stream of data. TCP sockets, Twitter, HDFS, Kafka,... Initializing Spark streaming val ssc = new StreamingContext(master, appName, batchDuration, [sparkHome], [jars]) Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
207 DStream Operations (1/2) Transformations: modify data from one DStream to a new DStream. Standard RDD operations (stateless/stateful operations): map, join,... Window operations: group all the records from a sliding window of the past time intervals into one RDD: window, reduceByKeyAndWindow,... Window length: the duration of the window. Slide interval: the interval at which the operation is performed. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
209 DStream Operations (2/2) Output operations: send data to external entity saveAsHadoopFiles, foreach, print,... Attaching input sources ssc.textFileStream(directory) ssc.socketStream(hostname, port) Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
210 Example 1 (1/3) Get hash-tags from Twitter. val ssc = new StreamingContext("local[2]", "Tweets", Seconds(1)) val tweets = TwitterUtils.createStream(ssc, None) DStream: a sequence of RDD representing a stream of data Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
211 Example 1 (2/3) Get hash-tags from Twitter. val ssc = new StreamingContext("local[2]", "Tweets", Seconds(1)) val tweets = TwitterUtils.createStream(ssc, None) val hashtags = tweets.flatMap(status => getTags(status)) transformation: modify data in one DStream to create another DStream Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
212 Example 1 (3/3) Get hash-tags from Twitter. val ssc = new StreamingContext("local[2]", "Tweets", Seconds(1)) val tweets = TwitterUtils.createStream(ssc, None) val hashTags = tweets.flatMap(status => getTags(status)) hashTags.saveAsHadoopFiles("hdfs://...") Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
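The snippets above only define the streaming computation; nothing runs until the context is started. As a sketch of the remaining boilerplate a complete program would include:

ssc.start()             // start receiving and processing data
ssc.awaitTermination()  // block until the streaming computation is stopped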
213 Example 2 Count frequency of words received every second. val ssc = new StreamingContext("local[2]", "NetworkWordCount", Seconds(1)) val lines = ssc.socketTextStream(ip, port) val words = lines.flatMap(_.split(" ")) val ones = words.map(x => (x, 1)) val freqs = ones.reduceByKey(_ + _) Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
214 Example 3 Count frequency of words received in the last minute. val ssc = new StreamingContext("local[2]", "NetworkWordCount", Seconds(1)) val lines = ssc.socketTextStream(ip, port) val words = lines.flatMap(_.split(" ")) val ones = words.map(x => (x, 1)) val freqs = ones.reduceByKey(_ + _) val freqs_60s = freqs.window(Seconds(60), Seconds(1)).reduceByKey(_ + _) Seconds(60): window length, Seconds(1): slide interval Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
215 Spark Streaming Hands-on Exercises (1/2) Stream data through a TCP connection and port 9999 nc -lk 9999 Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
216 Spark Streaming Hands-on Exercises (1/2) Stream data through a TCP connection and port 9999 nc -lk 9999 import the streaming libraries Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
217 Spark Streaming Hands-on Exercises (1/2) Stream data through a TCP connection and port 9999 nc -lk 9999 import the streaming libraries import org.apache.spark.streaming.{Seconds, StreamingContext} import org.apache.spark.streaming.StreamingContext._ Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
218 Spark Streaming Hands-on Exercises (1/2) Stream data through a TCP connection and port 9999 nc -lk 9999 import the streaming libraries import org.apache.spark.streaming.{Seconds, StreamingContext} import org.apache.spark.streaming.StreamingContext._ Print out the incoming stream every five seconds at port 9999 Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
219 Spark Streaming Hands-on Exercises (1/2) Stream data through a TCP connection and port 9999 nc -lk 9999 import the streaming libraries import org.apache.spark.streaming.{Seconds, StreamingContext} import org.apache.spark.streaming.StreamingContext._ Print out the incoming stream every five seconds at port 9999 val ssc = new StreamingContext("local[2]", "NetworkWordCount", Seconds(5)) val lines = ssc.socketTextStream(" ", 9999) lines.print() Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
220 Spark Streaming Hands-on Exercises (1/2) Count the number of each word in the incoming stream every five seconds at port 9999 Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
221 Spark Streaming Hands-on Exercises (1/2) Count the number of each word in the incoming stream every five seconds at port 9999 import org.apache.spark.streaming.{Seconds, StreamingContext} import org.apache.spark.streaming.StreamingContext._ val ssc = new StreamingContext("local[2]", "NetworkWordCount", Seconds(5)) val lines = ssc.socketTextStream(" ", 9999) val words = lines.flatMap(_.split(" ")) val pairs = words.map(x => (x, 1)) val wordCounts = pairs.reduceByKey(_ + _) wordCounts.print() Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
222 Spark Streaming Hands-on Exercises (2/2) Extend the code to generate word counts over the last 30 seconds of data, and repeat the computation every 10 seconds Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
223 Spark Streaming Hands-on Exercises (2/2) Extend the code to generate word counts over the last 30 seconds of data, and repeat the computation every 10 seconds import org.apache.spark.streaming.{Seconds, StreamingContext} import org.apache.spark.streaming.StreamingContext._ val ssc = new StreamingContext("local[2]", "NetworkWordCount", Seconds(5)) val lines = ssc.socketTextStream(" ", 9999) val words = lines.flatMap(_.split(" ")) val pairs = words.map(word => (word, 1)) val windowedWordCounts = pairs.reduceByKeyAndWindow(_ + _, _ - _, Seconds(30), Seconds(10)) windowedWordCounts.print() Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
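The inverse-reduce form of reduceByKeyAndWindow used above (passing both _ + _ and _ - _) also requires checkpointing to be enabled; a minimal sketch, with an illustrative checkpoint directory:

ssc.checkpoint("checkpoint")  // the directory name is a placeholder; required by the windowed reduce with an inverse function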
224 Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
225 Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
226 Introduction Graphs provide a flexible abstraction for describing relationships between discrete objects. Many problems can be modeled by graphs and solved with appropriate graph algorithms. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
227 Large Graph Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
228 Large-Scale Graph Processing Large graphs need large-scale processing. A large graph either cannot fit into the memory of a single computer, or fits only at a huge cost. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
229 Question Can we use platforms like MapReduce or Spark, which are based on the data-parallel model, for large-scale graph processing? Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
230 Data-Parallel Model for Large-Scale Graph Processing The platforms that have worked well for developing parallel applications are not necessarily effective for large-scale graph problems. Why? Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
231 Graph Algorithms Characteristics (1/2) Unstructured problems Difficult to extract parallelism based on partitioning of the data: the irregular structure of graphs. Limited scalability: unbalanced computational loads resulting from poorly partitioned data. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
232 Graph Algorithms Characteristics (1/2) Unstructured problems Difficult to extract parallelism based on partitioning of the data: the irregular structure of graphs. Limited scalability: unbalanced computational loads resulting from poorly partitioned data. Data-driven computations Difficult to express parallelism based on partitioning of computation: the structure of computations in the algorithm is not known a priori. The computations are dictated by nodes and links of the graph. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
233 Graph Algorithms Characteristics (2/2) Poor data locality The computations and data access patterns do not have much locality: the irregular structure of graphs. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
234 Graph Algorithms Characteristics (2/2) Poor data locality The computations and data access patterns do not have much locality: the irregular structure of graphs. High data access to computation ratio Graph algorithms are often based on exploring the structure of a graph to perform computations on the graph data. Runtime can be dominated by waiting for memory fetches: low locality. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
235 Proposed Solution Graph-Parallel Processing Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
236 Proposed Solution Graph-Parallel Processing Computation typically depends on the neighbors. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
237 Graph-Parallel Processing Restricts the types of computation. New techniques to partition and distribute graphs. Exploit graph structure. Executes graph algorithms orders-of-magnitude faster than more general data-parallel systems. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
238 Data-Parallel vs. Graph-Parallel Computation Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
239 Data-Parallel vs. Graph-Parallel Computation Data-parallel computation Record-centric view of data. Parallelism: processing independent data on separate resources. Graph-parallel computation Vertex-centric view of graphs. Parallelism: partitioning graph (dependent) data across processing resources, and resolving dependencies (along edges) through iterative computation and communication. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
240 Graph-Parallel Computation Frameworks Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
241 Data-Parallel vs. Graph-Parallel Computation Graph-parallel computation: restricting the types of computation to achieve performance. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
242 Data-Parallel vs. Graph-Parallel Computation Graph-parallel computation: restricting the types of computation to achieve performance. But, the same restrictions make it difficult and inefficient to express many stages in a typical graph-analytics pipeline. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
243 Data-Parallel and Graph-Parallel Pipeline Moving between table and graph views of the same physical data. Inefficient: extensive data movement and duplication across the network and file system. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
244 GraphX vs. Data-Parallel/Graph-Parallel Systems Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
245 GraphX vs. Data-Parallel/Graph-Parallel Systems Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
246 GraphX New API that blurs the distinction between Tables and Graphs. New system that unifies Data-Parallel and Graph-Parallel systems. It is implemented on top of Spark. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
247 Unifying Data-Parallel and Graph-Parallel Analytics Tables and Graphs are composable views of the same physical data. Each view has its own operators that exploit the semantics of the view to achieve efficient execution. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
248 Data Model Property Graph: represented using two Spark RDDs: Vertex collection: VertexRDD Edge collection: EdgeRDD // VD: the type of the vertex attribute // ED: the type of the edge attribute class Graph[VD, ED] { val vertices: VertexRDD[VD] val edges: EdgeRDD[ED, VD] } Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
249 Primitive Data Types // Vertex collection class VertexRDD[VD] extends RDD[(VertexId, VD)] // Edge collection class EdgeRDD[ED] extends RDD[Edge[ED]] case class Edge[ED](srcId: VertexId = 0, dstId: VertexId = 0, attr: ED = null.asInstanceOf[ED]) // Edge Triplet class EdgeTriplet[VD, ED] extends Edge[ED] EdgeTriplet represents an edge along with the vertex attributes of its neighboring vertices. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
250 Example (1/3) Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
251 Example (2/3) val sc: SparkContext // Create an RDD for the vertices val users: RDD[(Long, (String, String))] = sc.parallelize( Array((3L, ("rxin", "student")), (7L, ("jgonzal", "postdoc")), (5L, ("franklin", "prof")), (2L, ("istoica", "prof")))) // Create an RDD for edges val relationships: RDD[Edge[String]] = sc.parallelize( Array(Edge(3L, 7L, "collab"), Edge(5L, 3L, "advisor"), Edge(2L, 5L, "colleague"), Edge(5L, 7L, "pi"))) // Define a default user in case there are relationships with missing users val defaultUser = ("John Doe", "Missing") // Build the initial Graph val userGraph: Graph[(String, String), String] = Graph(users, relationships, defaultUser) Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
252 Example (3/3) // Constructed from above val userGraph: Graph[(String, String), String] // Count all users which are postdocs userGraph.vertices.filter { case (id, (name, pos)) => pos == "postdoc" }.count // Count all the edges where src > dst userGraph.edges.filter(e => e.srcId > e.dstId).count // Use the triplets view to create an RDD of facts val facts: RDD[String] = userGraph.triplets.map(triplet => triplet.srcAttr._1 + " is the " + triplet.attr + " of " + triplet.dstAttr._1) // Remove missing vertices as well as the edges connected to them val validGraph = userGraph.subgraph(vpred = (id, attr) => attr._2 != "Missing") facts.collect.foreach(println) Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
253 Property Operators (1/2) class Graph[VD, ED] { def mapVertices[VD2](map: (VertexId, VD) => VD2): Graph[VD2, ED] def mapEdges[ED2](map: Edge[ED] => ED2): Graph[VD, ED2] def mapTriplets[ED2](map: EdgeTriplet[VD, ED] => ED2): Graph[VD, ED2] } They yield new graphs with the vertex or edge properties modified by the map function. The graph structure is unaffected. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
254 Property Operators (2/2) val newGraph = graph.mapVertices((id, attr) => mapUdf(id, attr)) val newVertices = graph.vertices.map { case (id, attr) => (id, mapUdf(id, attr)) } val newGraph = Graph(newVertices, graph.edges) Both are logically equivalent, but the second one does not preserve the structural indices and would not benefit from the GraphX system optimizations. Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
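As a hedged sketch of the remaining property operators, applied to the userGraph built in the earlier example (the new attribute values are purely illustrative):

// Prefix every edge attribute with the source vertex's name, e.g. "rxin-collab".
val labeledEdges = userGraph.mapTriplets(t => t.srcAttr._1 + "-" + t.attr)

// Upper-case every edge attribute; the graph structure is unchanged.
val loudEdges = userGraph.mapEdges(e => e.attr.toUpperCase)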
255 Map Reduce Triplets Map-Reduce for each vertex Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
256 Map Reduce Triplets Map-Reduce for each vertex // what is the age of the oldest follower for each user? val oldestFollowerAge = graph.mapReduceTriplets( e => Iterator((e.dstId, e.srcAttr)), // Map (a, b) => math.max(a, b) // Reduce ).vertices Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
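In Spark releases after these slides (1.2 and later), mapReduceTriplets was superseded by aggregateMessages. As a sketch of the same oldest-follower query in that API, assuming a graph whose vertex attribute is the user's age as an Int:

// Send each follower's age to the vertex it follows, then keep the maximum per vertex.
val oldestFollowerAge = graph.aggregateMessages[Int](
  ctx => ctx.sendToDst(ctx.srcAttr),  // Map: the follower's age travels along the edge
  (a, b) => math.max(a, b)            // Reduce: the oldest follower wins
)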
257 Structural Operators class Graph[VD, ED] { // returns a new graph with all the edge directions reversed def reverse: Graph[VD, ED] // returns the graph containing only the vertices and edges that satisfy // the edge and vertex predicates def subgraph(epred: EdgeTriplet[VD,ED] => Boolean, vpred: (VertexId, VD) => Boolean): Graph[VD, ED] // constrains the graph to the vertices and edges that are also found // in the other (input) graph def mask[VD2, ED2](other: Graph[VD2, ED2]): Graph[VD, ED] } Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
258 Structural Operators Example // Build the initial Graph val graph = Graph(users, relationships, defaultUser) // Run Connected Components val ccGraph = graph.connectedComponents() // Remove missing vertices as well as the edges connected to them val validGraph = graph.subgraph(vpred = (id, attr) => attr._2 != "Missing") // Restrict the answer to the valid subgraph val validCCGraph = ccGraph.mask(validGraph) Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171
259 Questions? Amir H. Payberah (Tehran Polytechnic) Spark 1393/10/ / 171