The Flink Big Data Analytics Platform
Marton Balassi, Gyula Fora
{mbalassi, gyfora}@apache.org

What is Apache Flink?
- Open source: started in 2009 by the Berlin-based database research groups; in the Apache Incubator since mid-2014; 61 contributors as of the 0.7.0 release
- Fast and reliable: a fast, general-purpose distributed data processing system that combines batch and stream processing; up to 100x faster than Hadoop
- Ready to use: programming APIs for Java and Scala; tested on large clusters

What is Apache Flink?
[Diagram: an analytical program is submitted through the Flink client and optimizer to the master, which coordinates the workers of a Flink cluster.]

This Talk
- Introduction to Flink
- API overview
- Distinguishing Flink
- Flink from a user perspective
- Performance
- Flink roadmap and closing

Open source Big Data Landscape
- Applications: Hive, Cascading, Mahout, Pig, Crunch
- Data processing engines: MapReduce, Spark, Storm, Flink, Tez
- App and resource management: YARN, Mesos
- Storage, streams: HDFS, HBase, Kafka

Flink stack
[Diagram of the stack, top to bottom:]
- APIs: Java API (batch), Scala API (batch), Java API (streaming), Python API, Graph API (Spargel), Apache MRQL
- Common API and Streams Builder, with the Flink Optimizer
- Hybrid batch/streaming runtime; batch programs can also run on Java collections or on Apache Tez
- Storage and streams: files, HDFS, S3, JDBC, Azure, Kafka, RabbitMQ, Redis
- Execution: local, cluster manager (native, YARN), EC2

Flink APIs

Programming model
Data abstractions: DataSet and DataStream.
[Diagram: a program is a dataflow of operators X and Y transforming DataSets/DataStreams A -> B -> C; in parallel execution, each operator runs on the parallel partitions A(1), A(2), ...]
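To make the model concrete, here is a minimal batch program sketch in the Java API (the paths and the job name are illustrative, not from the talk):

// Obtain an environment, build a dataflow of operators, define a sink.
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

DataSet<String> a = env.readTextFile("hdfs:///input");    // DataSet A
DataSet<String> b = a.filter(line -> !line.isEmpty());    // operator X
DataSet<Integer> c = b.map(line -> line.length());        // operator Y

c.writeAsText("hdfs:///output");                          // sink
env.execute("Dataflow example");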

Flexible pipelines
Operators: Map, FlatMap, MapPartition, Filter, Project, Reduce, ReduceGroup, Aggregate, Distinct, Join, CoGroup, Cross, Iterate, Iterate Delta, Iterate-Vertex-Centric, Windowing
[Diagram: an example pipeline in which two sources are mapped, reduced, iterated over, and joined before reaching the sink.]

WordCount, Java API

DataSet<String> text = env.readTextFile(input);

DataSet<Tuple2<String, Integer>> result = text
    .flatMap((value, out) -> {
        for (String token : value.split("\\W")) {
            out.collect(new Tuple2<>(token, 1));
        }
    })
    .groupBy(0)
    .sum(1);

WordCount, Scala API

val text = env.readTextFile(input)

val words = text.flatMap { line => line.split("\\W+") }

val counts = words.groupBy { word => word }.count()

WordCount, Streaming API

DataStream<String> text = env.readTextFile(input);

DataStream<Tuple2<String, Integer>> result = text
    .flatMap((value, out) -> {
        for (String token : value.split("\\W")) {
            out.collect(new Tuple2<>(token, 1));
        }
    })
    .groupBy(0)
    .sum(1);

Is there anything beyond WordCount?

Beyond Key/Value Pairs

class Impression {
    public String url;
    public long count;
}

class Page {
    public String url;
    public String topic;
}

DataSet<Page> pages = ...;
DataSet<Impression> impressions = ...;

// aggregate the impression counts per URL
DataSet<Impression> aggregated = impressions
    .groupBy("url")
    .sum("count");

// outputs pairs of matching pages and impressions
pages.join(impressions)
    .where("url")
    .equalTo("url")
    .print();

Preview: Logical Types

DataSet<Row> dates = env.readCsv(...).as("order_id", "date");
DataSet<Row> sessions = env.readCsv(...).as("id", "session");

DataSet<Row> joined = dates
    .join(sessions)
    .where("order_id")
    .equalTo("id");

joined.groupBy("date")
    .reduceGroup(new SessionFilter());

class SessionFilter implements GroupReduceFunction<SessionType> {
    public void reduce(Iterable<SessionType> values, Collector<SessionType> out) { ... }
}

public class SessionType {
    public String order_id;
    public Date date;
    public String session;
}

Distinguishing Flink

Hybrid batch/streaming runtime
- Batch and stream processing in the same system
- No micro-batches: a unified runtime
- Competitive performance
- Code is reusable between batch and streaming programs, which makes development and testing much easier
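As a sketch of that reuse (the Tokenizer class and the batchEnv/streamEnv environment variables are illustrative, not from the talk), one function implementation can back both APIs:

// One function implementation...
public static class Tokenizer implements FlatMapFunction<String, Tuple2<String, Integer>> {
    public void flatMap(String value, Collector<Tuple2<String, Integer>> out) {
        for (String token : value.split("\\W")) {
            out.collect(new Tuple2<>(token, 1));
        }
    }
}

// ...used in a batch program:
DataSet<Tuple2<String, Integer>> batchCounts =
    batchEnv.readTextFile(input).flatMap(new Tokenizer()).groupBy(0).sum(1);

// ...and unchanged in a streaming program:
DataStream<Tuple2<String, Integer>> streamCounts =
    streamEnv.readTextFile(input).flatMap(new Tokenizer()).groupBy(0).sum(1);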

Flink Streaming
- Most DataSet operators are also available on DataStreams
- Temporal and streaming-specific operators: window/mini-batch operators, window join, cross, etc.
- Support for iterative stream processing
- Connectors for different data sources: Kafka, Flume, RabbitMQ, Twitter, etc.

Flink Streaming

// Build a new model on every second of new data
DataStream<Double[]> model = env
    .addSource(new TrainingDataSource())
    .window(1000)
    .reduceGroup(new ModelBuilder());

// Predict new data using the most up-to-date model
DataStream<Integer> prediction = env
    .addSource(new NewDataSource())
    .connect(model)
    .map(new Predictor());

Lambda architecture
[Diagram. Source: https://www.mapr.com/developercentral/lambda-architecture]

Lambda architecture in Flink
[Diagram]

Dependability
- Flink manages its own memory
- Caching and data processing happen in a dedicated memory fraction
- The system never breaks the JVM heap; it gracefully spills to disk
[Diagram: the JVM heap is divided into the unmanaged heap used by user code, the Flink managed heap used for hashing/sorting/caching, and the network buffers used for shuffles/broadcasts; the next version unifies the network buffers and the managed heap.]

Operating on Serialized Data
- Hadoop: serializes the data on every operation; highly robust, never gives up on you
- Spark: works on objects; RDDs may be stored serialized; serialization is considered slow, so it is used only when needed
- Flink: makes serialization really cheap through partial deserialization and operating on the serialized form; efficient and robust!

Operating on Serialized Data: Microbenchmark
- Sorting 1 GB worth of (long, double) tuples (67,108,864 elements)
- Simple quicksort

Memory Management

public class WC {
    public String word;
    public int count;
}

[Diagram: objects such as WC instances are mapped transparently onto a pool of memory pages.]
- Works on pages of bytes, maps objects transparently
- Full control over memory, out-of-core enabled
- Algorithms work on the binary representation
- Addresses individual fields (no need to deserialize the whole object)
- Moves memory between operations
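For example (a sketch; wc stands for an existing DataSet<WC>), grouping by a field expression lets the runtime compare and aggregate on the binary representation, touching only the addressed fields:

// Group the WC records by their "word" field and sum the "count"
// field, without deserializing whole WC objects.
DataSet<WC> counts = wc
    .groupBy("word")
    .sum("count");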

Flink from a user perspective

Flink programs run everywhere
- Local debugging
- As Java collection programs
- Embedded (e.g., in a web container)
- On a cluster (batch), on the Flink runtime or on Apache Tez
- On a cluster (streaming)

Migrate Easily
Flink supports out of the box:
- Hadoop data types (Writables)
- Hadoop Input/Output Formats
- Hadoop functions and the Hadoop object model
[Diagram: a Hadoop Input -> Map -> Reduce -> Output pipeline embedded in a Flink dataflow of DataSets with map, reduce, and join operators.]
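A sketch of reading a file through a Hadoop InputFormat, based on the flink-hadoop-compatibility module (the inputPath variable is illustrative):

// Wrap Hadoop's TextInputFormat so Flink can use it as a source.
Job job = Job.getInstance();
HadoopInputFormat<LongWritable, Text> hadoopIF =
    new HadoopInputFormat<LongWritable, Text>(
        new TextInputFormat(), LongWritable.class, Text.class, job);
TextInputFormat.addInputPath(job, new Path(inputPath));

// The records arrive as (key, value) tuples of Hadoop Writables.
DataSet<Tuple2<LongWritable, Text>> lines = env.createInput(hadoopIF);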

Little tuning or configuration required
- No memory thresholds to configure: Flink manages its own memory
- No complicated network configs: the pipelining engine requires much less memory for data exchange
- No serializers to configure: Flink handles its own type extraction and data representation
- Programs can be adjusted to the data automatically: Flink's optimizer can choose execution strategies automatically

Understanding Programs
- Visualizes the operations and the data movement of programs
- Analyze programs before execution
[Screenshot from Flink's plan visualizer]

Understanding Programs
- Analyze after execution (times, stragglers, ...)

Iterations in other systems
[Diagram: the client drives the loop outside the system, submitting a separate step for every iteration.]

Iterations in Flink
- A streaming dataflow with feedback: reduce, map, and join operators inside the loop
- The system is iteration-aware and performs automatic optimization
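A minimal sketch of the bulk iteration API (the counting loop is illustrative): the loop lives inside the dataflow rather than in the client.

// Start an iteration with at most 10 supersteps.
IterativeDataSet<Long> initial = env.fromElements(0L).iterate(10);

// The step function: one map per superstep.
DataSet<Long> body = initial.map(new MapFunction<Long, Long>() {
    public Long map(Long value) {
        return value + 1;
    }
});

// Close the loop: feed the step's result back as the next input.
DataSet<Long> result = initial.closeWith(body);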

Automatic Optimization for Iterative Programs
- Pushing work out of the loop
- Caching loop-invariant data
- Maintaining state as an index

Performance

Distributed Grep
- 1 TB of data (log files)
- 24 machines with 32 GB memory and regular HDDs
- HDFS 2.4.0
- Flink 0.7-incubating-SNAPSHOT
- Spark 1.2.0-SNAPSHOT
[Diagram: a single HDFS source feeds three filters (term 1, term 2, term 3), each emitting its matches.]
Result: Flink is up to 2.5x faster.

Distributed Grep: Flink Execution
[Diagram]
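A sketch of what the job might look like in the Flink API (the patterns array and the paths are illustrative, not the benchmark code): all three filters hang off the same source, so Flink pipelines them over a single scan of the input.

DataSet<String> lines = env.readTextFile(inputPath);

// Each filter shares the one source; the data is read only once.
for (String pattern : patterns) {
    lines.filter(line -> line.contains(pattern))
         .writeAsText(outputPath + "/" + pattern);
}
env.execute("Distributed grep");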

Distributed Grep: Spark Execution
Spark executes the job in 3 stages, re-reading the input for each one:
- Stage 1: HDFS -> Filter Term 1 -> Matches
- Stage 2: HDFS -> Filter Term 2 -> Matches
- Stage 3: HDFS -> Filter Term 3 -> Matches

Spark in-memory pinning
Cache the data in memory to avoid reading it for each filter:
[Diagram: HDFS -> in-memory RDD -> three filters, each emitting its matches.]

JavaSparkContext sc = new JavaSparkContext(conf);

// Pin the input in memory, spilling to disk when it does not fit.
JavaRDD<String> file = sc.textFile(inFile)
    .persist(StorageLevel.MEMORY_AND_DISK());

for (int p = 0; p < patterns.length; p++) {
    final String pattern = patterns[p];
    JavaRDD<String> res = file.filter(new Function<String, Boolean>() {
        public Boolean call(String line) {
            // keep the lines containing the pattern
            return line.contains(pattern);
        }
    });
}

Spark in-memory pinning
[Chart: runtimes of 9 minutes with the RDD 100% in memory vs. 37 minutes once Spark starts spilling the RDD to disk.]

PageRank
Dataset: Twitter follower graph
- 41,652,230 vertices (users)
- 1,468,365,182 edges (followings)
- 12 GB input data

PageRank results
[Chart]

Why is there a difference? Let's have a look at the iteration times: Flink averages 48 sec. per iteration, Spark 99 sec.

PageRank on Flink with Delta Iterations
The algorithm runs 60 iterations until convergence (the runtime includes the convergence check).
[Chart: the runtime drops from 100 minutes to 8.5 minutes.]
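A minimal sketch of the delta iteration API (the decrement loop is illustrative, not the PageRank code): only changed elements stay in the workset, so iterations get cheaper as the computation converges.

// Initial solution set and workset: (id, value) pairs.
DataSet<Tuple2<Long, Long>> initial = env.fromElements(
        new Tuple2<>(1L, 5L), new Tuple2<>(2L, 3L));

// Delta iteration keyed on field 0, with at most 10 supersteps.
DeltaIteration<Tuple2<Long, Long>, Tuple2<Long, Long>> iteration =
        initial.iterateDelta(initial, 10, 0);

// Step function: decrement every workset element and drop the
// elements that reached zero.
DataSet<Tuple2<Long, Long>> next = iteration.getWorkset()
        .map(new MapFunction<Tuple2<Long, Long>, Tuple2<Long, Long>>() {
            public Tuple2<Long, Long> map(Tuple2<Long, Long> t) {
                return new Tuple2<>(t.f0, t.f1 - 1);
            }
        })
        .filter(new FilterFunction<Tuple2<Long, Long>>() {
            public boolean filter(Tuple2<Long, Long> t) {
                return t.f1 > 0;
            }
        });

// The changed elements update the solution set and form the next
// workset; the loop ends when the workset becomes empty.
DataSet<Tuple2<Long, Long>> result = iteration.closeWith(next, next);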

Again, explaining the difference
- On average, a delta iteration runs for 6.7 seconds
- Flink (bulk): 48 sec.
- Spark (bulk): 99 sec.

Streaming performance
[Chart]

Flink Roadmap
Flink has a major release every 3 months, with one or more bug-fixing releases in between. Upcoming:
- Finer-grained fault tolerance
- Logical (SQL-like) field addressing
- Python API
- Flink Streaming and Lambda architecture support
- Flink on Tez
- ML on Flink (e.g., the Mahout DSL)
- Graph DSL on Flink
- and more


http://flink.incubator.apache.org
github.com/apache/incubator-flink
@ApacheFlink
Marton Balassi, Gyula Fora
{mbalassi, gyfora}@apache.org