NYC Taxi Trip and Fare Data Analytics using BigData
NYC Taxi Trip and Fare Data Analytics using BigData
Umang Patel, Department of Computer Science and Engineering, University of Bridgeport, USA (umapatel@my.bridgeport.edu)

Abstract: As metropolitan areas grow, urban data are captured at scale and have become readily accessible, opening new opportunities for data-driven analysis that can improve the lives of people in urban zones. This project focuses on the dataset of NYC taxi trips and fares. Traditionally, the data captured by the NYC Taxi & Limousine Commission was analysed manually by analysts to identify best practices and derive results that would ultimately help people who commute by taxi. By the early 2000s taxi services had grown rapidly and the data captured reached gigabytes in size, which made manual analysis impractical; Big Data techniques came under the limelight as a way to analyse such a colossal dataset. There were around 180 million taxi rides in New York City in 2014. A Big Data platform can analyse thousands of gigabytes in a fraction of the time and expedite the process. The data can be analysed for several purposes, such as avoiding traffic, or identifying areas where service runs less frequently than at busy locations. This information can be used by numerous authorities and industries for their own purposes: government officials can use it to plan supplementary public transport, and companies such as Uber can use it for their own taxi services.

I. INTRODUCTION
Transportation has proved to be one of the most vital services in large cities, and diverse modes of transportation are available. In large cities in the United States and around the world, taxis play a foremost role and serve as a convenient alternative to public transportation for meeting everyday needs.
For instance, there are today nearly [1] 50,000 vehicles and 100,000 drivers registered with the NYC Taxi and Limousine Commission. The [1] TLC (Taxi and Limousine Commission) of New York City faces many open questions about how taxi service should be distributed across the city, with decisions currently based on assumptions about pickups, time, distance, and airport schedules. To provide good taxi service and plan its effective integration into the city transportation system, it is important to analyse the demand for taxi transportation. The dataset provides information such as where taxis are used, when taxis are used, and the factors that lead the public to choose taxis over other modes of transportation. Present-day transportation datasets contain far more information than earlier ones. For example, since 2010 the TLC has recorded Global Positioning System (GPS) data for every taxi trip, including the time and location (latitude and longitude) of the pickup and drop-off. The complete traffic dataset contains nearly 180 million rows for the year 2014. Because of this volume, the data is a good example of Big Data, and Big Data tools make it practical to develop procedures that clean and process the raw data into a form useful for transportation services. The core objective of this work is to analyse the factors behind taxi demand: the most frequent pickups and drop-offs by location, the times of heaviest traffic, and how to meet the needs of the public. Explicitly, the key contributions of this paper are as follows. First, to the best of our knowledge, we conduct an analysis that identifies the top driver by distance travelled, fare collected, time travelled, and efficiency. This will help the commission reward such drivers and encourage them to perform at their best. This analysis was done with MapReduce programming.
Furthermore, to achieve our goal, we propose a subsequent analysis on region. Here we use the location attributes PickUp Latitude, PickUp Longitude, DropOff Latitude and DropOff Longitude. Using the pickup latitude and longitude we identify the most frequent pickup locations, and likewise for the drop-off locations. This helps the commission provide more taxis at the busiest pickup locations. This analysis was also done with MapReduce programming. To quantify the total pickups and drop-offs by time of day based on location, we used Hive. This third analysis computes the total pickup or drop-off count for every hour of the day per location; for such an intricate query we used Hive, which is part of the Hadoop ecosystem. Ultimately, we analysed the fares to obtain driver revenue, both gross and net (Driver Fare Revenue, Gross and Net). This analysis was prepared with another Hadoop ecosystem technology called Pig, using the Pig Latin language to write the query. Our evaluation is comprehensive: we test all of our analyses on a real-world dataset consisting of GPS records

from more than 14,000 [2] taxicabs in a large metropolitan area with a population of more than 10 million. The rest of the paper is organized as follows: Section II introduces related work, Section III presents our problem statements, Section IV reports our performance evaluation, and Section V concludes.

II. RELATED WORKS
This section introduces the technologies used to analyse the large dataset. The complete analysis is performed with Big Data tools, namely Hadoop and its ecosystem. Brief definitions of Big Data, Hadoop, MapReduce, and the Hadoop ecosystem components Hive and Pig follow.

A. BigData
Big data usually refers to data sets with sizes beyond the ability of commonly used software tools to capture, curate, manage, and process within a tolerable elapsed time. What counts as big data "size" is a constantly moving target, as of 2012 ranging from a few dozen terabytes to many petabytes of data [8].

B. Hadoop
Apache Hadoop is an open-source software framework written in Java for distributed storage and distributed processing of very large data sets on computer clusters built from commodity hardware. All the modules in Hadoop are designed with the fundamental assumption that hardware failures (of individual machines, or racks of machines) are commonplace and should therefore be handled automatically in software by the framework [9].

C. MapReduce
MapReduce is a programming model and an associated implementation for processing and generating large data sets with a parallel, distributed algorithm on a cluster. Conceptually similar approaches have been well known since 1995, when the Message Passing Interface standard introduced reduce and scatter operations [10].

D. Hive
Apache Hive is a data warehouse infrastructure built on top of Hadoop that provides data summarization, query, and analysis. Initially developed by Facebook, Apache Hive is now used and developed by other companies such as Netflix.
Amazon maintains a software fork of Apache Hive that is included in Amazon Elastic MapReduce on Amazon Web Services [11].

E. Pig
Pig is a high-level platform for creating MapReduce programs used with Hadoop. The language for this platform is called Pig Latin. Pig Latin abstracts the programming from the Java MapReduce idiom into a notation that makes MapReduce programming high level, similar to SQL for RDBMS systems. Pig Latin can be extended using UDFs (User Defined Functions), which the user can write in Java, Python, JavaScript, Ruby or Groovy and then call directly from the language. Pig was originally developed at Yahoo! Research around 2006 to give researchers an ad-hoc way of creating and executing MapReduce jobs on very large data sets; in 2007 it was moved into the Apache Software Foundation [12].

This section also surveys earlier big data projects on the NYC taxi dataset, namely work on optimizing taxi usage, and on big data infrastructures and applications for transport data. TransDec (Demiryurek et al. 2010) is a University of Southern California project that creates a big data infrastructure adapted to transport; it is built on three tiers comparable to the MVC (Model, View, Controller) pattern [5]. (Jagadish et al. 2014) propose a big data pipeline based on five steps: data acquisition; data cleaning and information extraction; data integration and aggregation; big data analysis; and data interpretation [7]. (Yuan et al. 2013), (Ge et al. 2010) and (Lee et al. 2004) worked on transport projects that help taxi companies optimize their taxi usage: they optimize the odds that a client needing a taxi meets an empty taxi, and the travel time from taxi to client, based on historical data collected from running taxis [6].

III. PROBLEM DEFINITION
Our study of the NYC taxi data set centres on a number of specific analyses.
These problem statements are described one after the other in the following sequence.

A. Problem Definition I: Analysis on Individual
Problem definition I consists of analyses of individual drivers, each implemented with MapReduce programming:

1) Driver with most distance travelled: determine the driver who travelled the greatest distance in miles.
map(hack_license, Trip_distance) = Emit(inter_Hack_license, inter_Trip_distance)
reduce(inter_Hack_license, inter_Trip_distance) = Emit(Hack_license, Max_Trip_distance)

2) Driver with most fare collected: determine the driver who collected the most fare in dollars, including the fare amount, surcharge, MTA tax, and tip amount, i.e. the total amount.
map(hack_license, Total_amount) = Emit(inter_Hack_license, inter_Total_amount)
reduce(inter_Hack_license, inter_Total_amount) = Emit(Hack_license, Max_Total_amount)

3) Driver with most time travelled: determine the driver who spent the most travel time in seconds.
map(hack_license, Trip_time) = Emit(inter_Hack_license, inter_Trip_time)
reduce(inter_Hack_license, inter_Trip_time) = Emit(Hack_license, Max_Trip_time)

4) Driver with most efficiency based on distance and time: determine the most efficient driver, computed as distance / time subject to a minimum distance travelled.
map(hack_license, Trip_distance / Trip_time) = Emit(inter_Hack_license, inter_Trip_distance / inter_Trip_time)
reduce(inter_Hack_license, inter_Trip_distance / inter_Trip_time) = Emit(Hack_license, Max_efficiency)

B. Problem Definition II: Analysis on Region
Problem definition II consists of analyses on regions, again implemented with MapReduce programming:

1) Most pickup location: based on region, i.e. the latitude and longitude of the pickup, so a pickup location is the combination of Pickup_latitude and Pickup_longitude.
map(pickup_location) = Emit(inter_Pickup_location, 1)
reduce(inter_Pickup_location, 1) = Emit(Pickup_location, Sum)

2) Most drop-off location: based on region, i.e. the latitude and longitude of the drop-off, so a drop-off location is the combination of Dropoff_latitude and Dropoff_longitude.
map(dropoff_location) = Emit(inter_Dropoff_location, 1)
reduce(inter_Dropoff_location, 1) = Emit(Dropoff_location, Sum)

C. Problem Definition III: Analysis based on time and location
This section sheds some light on how we carry out more complex analyses using Hive. Hive is part of the Hadoop ecosystem: software built on top of Hadoop that facilitates querying and managing large datasets with simple SQL-like commands. In this analysis we determine the average total pickups and drop-offs by time of day based on location. Fig. 1 shows the result.

Fig. 1. Average Total Pick-ups and Drop-offs by Time of Day based on location [2]
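As a concrete illustration of the map/reduce pseudocode in problem definitions I and II, the following Python snippet simulates the "driver with most distance travelled" job in memory. It is a sketch only, not actual Hadoop code; the sample hack licenses and mileages are invented for illustration.

```python
from collections import defaultdict

# Each record: (hack_license, trip_distance in miles), as in the TLC trip data.
trips = [
    ("D1", 3.2), ("D2", 7.5), ("D1", 12.1),
    ("D2", 1.4), ("D3", 5.0), ("D1", 2.2),
]

def mapper(record):
    # map(hack_license, Trip_distance) = Emit(hack_license, trip_distance)
    hack_license, trip_distance = record
    yield hack_license, trip_distance

def reducer(hack_license, distances):
    # reduce(hack_license, [distances]) = Emit(hack_license, max distance)
    yield hack_license, max(distances)

# Shuffle phase: group intermediate values by key, as Hadoop does between stages.
grouped = defaultdict(list)
for record in trips:
    for key, value in mapper(record):
        grouped[key].append(value)

results = dict(kv for key, values in grouped.items() for kv in reducer(key, values))
print(results)  # {'D1': 12.1, 'D2': 7.5, 'D3': 5.0}
```

The same skeleton covers problem definitions I.2 to I.4 and II by changing the emitted value: total amount or trip time with a max reducer, the distance/time ratio for efficiency, or a constant 1 with a sum reducer for the location counts.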

Fig. 2 and Fig. 3 show examples of actual outputs generated from the Hive query.

Fig. 2. Average Total Drop-offs by Time of Day based on location
Fig. 3. Average Total Pick-ups by Time of Day based on location
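The paper does not reproduce the Hive query itself, but the aggregation behind Figs. 1 to 3 amounts to a GROUP BY over location and hour. A minimal Python sketch of that logic, using an invented coarse zone label in place of raw latitude/longitude and illustrative column names rather than the paper's actual schema, looks like this:

```python
from collections import Counter
from datetime import datetime

# Hypothetical trip rows: (pickup_datetime, pickup_zone). The real data keys
# pickups by latitude/longitude; a coarse zone label stands in here.
rows = [
    ("2014-03-01 08:15:00", "Midtown"),
    ("2014-03-01 08:40:00", "Midtown"),
    ("2014-03-01 09:05:00", "Midtown"),
    ("2014-03-01 08:20:00", "JFK"),
]

# Rough equivalent of: SELECT zone, hour(pickup_datetime), count(*)
#                      FROM trips GROUP BY zone, hour(pickup_datetime)
counts = Counter(
    (zone, datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").hour) for ts, zone in rows
)
print(counts[("Midtown", 8)])  # 2
```

Dividing each count by the number of days in the dataset would then give the per-hour averages plotted in the figures.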

D. Problem Definition IV: Analysis based on Fare
Here we carry out a complex analysis using Pig. Pig is part of the Hadoop ecosystem: a platform, built on top of Hadoop, that facilitates MapReduce programs, with a language called Pig Latin. Pig Latin abstracts the programming from the Java MapReduce idiom into a notation that makes MapReduce programming high level, similar to SQL in an RDBMS; roughly 10 lines of Pig correspond to some 200 lines of Java code. In this analysis we determine the average revenue per hour, including both gross revenue and net revenue. Fig. 4 shows the Average Driver Fare Revenue per hour (Gross and Net).

Fig. 4. Average Driver Fare Revenue per hour (Gross and Net) [2]

IV. PERFORMANCE EVALUATION
In this section we compare the performance of MapReduce and Hive on all the analyses defined in Problem Definitions I and II. As expected, MapReduce runs faster than Hive. But for complex queries such as those in Problem Definitions III and IV, Hive is preferred because it is easier to write and understand.

TABLE I
PERFORMANCE EVALUATION USING MAPREDUCE AND HIVE (TIME IN SECONDS)

Problem Definition        MapReduce   Hive
Problem Definition 1.1        15        20
Problem Definition 1.2        12        15
Problem Definition 1.3        19        22
Problem Definition 1.4        35        45
Problem Definition 2.1         8        10
Problem Definition 2.2         8         9

V. CONCLUSION
In this work we built a system that supports the visual exploration of large origin-destination and spatiotemporal data. Its most vital component is a visual query model that allows users to quickly select data slices and work with them. The project should be helpful to many industries in the near future because of its balance between simplicity and expressiveness. By combining this data with other sources of information about neighborhood needs and employment, a model can explain the temporal variation of travel demand for taxis. Fig. 5 shows the locations with the most pickups and drop-offs.

Fig. 5. Most frequent pickup and drop-off locations in NYC

The first analysis, Analysis on Individual, helps determine the ability of an individual driver in terms such as efficiency, speed, and accuracy; this is useful for rewarding drivers who perform well and for training those who are struggling. In the second analysis we determine which region has the highest pickup and drop-off

locations; this helps the vendor provide more taxis where pickups are frequent and fewer taxis where drop-offs dominate, so the system works more efficiently. In the third analysis we determine the average total pickups and drop-offs by time of day based on location; with it we can determine the number of pickups and drop-offs in a particular region at a particular time, which is helpful for supplying more taxis to a region when they are needed. The last analysis, Average Driver Fare Revenue per hour (Gross and Net), helps determine whether the business is running at a profit or a loss. Currently only a few queries have been analysed, but in the future the data can be mined further for additional benefits. We could also analyse trips between Penn Station and the three nearest airports in the NYC region to show how mode choice is affected by the size of the traveling group, the travelers' valuation of time, and the time of day. Ultimately, we conclude that this study of trips provides planners, engineers, and decision makers with useful information about how people use the transportation system. By identifying the factors that drive taxi demand, forecasts can be made about how this demand can be expected to grow and change as neighborhoods evolve. As decisions are made regarding the regulation of the taxi industry, the provision of transit service, and urban development, these models help form a complete and holistic vision of how travel patterns and use of modes can be expected to respond.

ACKNOWLEDGMENT
Special thanks to the Taxi & Limousine Commission of New York City for providing the data used in this paper. We also thank Prof. Jeongkyu Lee and the Department of Computer Science and Engineering for their continuous support.

REFERENCES
[1] NYC Taxi & Limousine Commission. http://www.nyc.gov/html/tlc/html/about/about.shtml
[2] 2014 Taxicab Fact Book. http://www.nyc.gov/html/tlc/downloads/pdf/2014_taxicab_fact_book.pdf
[3] S. Agarwal, B. Mozafari, A. Panda, H. Milner, S. Madden, and I. Stoica. BlinkDB: Queries with bounded errors and bounded response times on very large data. In Proc. EuroSys, pages 29-42, 2013.
[4] G. Andrienko, N. Andrienko, C. Hurter, S. Rinzivillo, and S. Wrobel. From movement tracks through events to places: Extracting and characterizing significant places from mobility data. In Visual Analytics Science and Technology (VAST), 2011 IEEE Conference on, pages 161-170. IEEE, 2011.
[5] U. Demiryurek, F. Banaei-Kashani, and C. Shahabi. TransDec: A spatiotemporal query processing framework for transportation systems. IEEE, pages 1197-1200, 2010.
[6] Y. Ge et al. An energy-efficient mobile recommender system. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '10), 2010.
[7] H. V. Jagadish et al. Big data and its technical challenges. Communications of the ACM, 2014.
[8] Big data, https://en.wikipedia.org/wiki/Big_data
[9] Apache Hadoop, https://en.wikipedia.org/wiki/Apache_Hadoop
[10] MapReduce, https://en.wikipedia.org/wiki/MapReduce
[11] Apache Hive, https://en.wikipedia.org/wiki/Apache_Hive
[12] Pig (programming tool), https://en.wikipedia.org/wiki/Pig_(programming_tool)