z/OS Hybrid Batch Processing and Big Data




z/OS Hybrid Batch Processing and Big Data
Stephen Goetze and Kirk Wolf, Dovetailed Technologies, LLC
Thursday, August 7, 2014, 1:30 PM - 2:30 PM, Session 15496
www.dovetail.com

Trademarks
Co:Z is a registered trademark of Dovetailed Technologies, LLC. z/OS, DB2, zEnterprise and zBX are registered trademarks of IBM Corporation. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Hadoop, HDFS, Hive, HBase and Pig are either registered trademarks or trademarks of the Apache Software Foundation. Linux is a registered trademark of Linus Torvalds.

Agenda
- Define Hybrid Batch Processing
- Hello World Example
- Security Considerations
- Hybrid Batch Processing and Big Data
- Processing z/OS syslog data with Hive
- Processing z/OS DB2 data with RHadoop
- Summary / Questions

zEnterprise Hybrid Computing Models
Well known:
- zBX/zLinux as user-facing edge, web and application servers; z/OS provides back-end databases and transaction processing
- zBX as special purpose appliances or optimizers: DB2 Analytics Accelerator, DataPower
Another model: z/OS Hybrid Batch
- zBX/zLinux/Linux/Windows integrated with z/OS batch

z/OS Hybrid Batch Processing
1. The ability to execute a program or script on a virtual server from a z/OS batch job step
2. The target program may already exist and should require little or no modification
3. The target program's input and output are redirected from/to z/OS spool files or datasets
4. The target program may easily access other z/OS resources: DDs, data sets, POSIX files and programs
5. The target program's exit code is adopted as the z/OS job step condition code
Data security is governed by SAF (RACF/ACF2/TSS). This model requires new enablement software.

Co:Z Co-Processing Toolkit
Implements the z/OS Hybrid Batch model:
- The Co:Z Launcher starts a program on a target server and automatically redirects the standard streams back to job step DDs
- The target program can use Co:Z Dataset Pipes commands to reach back into the active job step and access z/OS resources:
  fromdsn/todsn - read/write a z/OS DD or data set
  fromfile/tofile - read/write a z/OS Unix file
  cozclient - run a z/OS Unix command
Free (commercial support licenses are available). Visit http://dovetail.com for details.
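To make the command pairing concrete, here is a minimal sketch of a target-system script using these commands, assuming it runs under the Co:Z Launcher; the DD name, file paths, and the generic cozclient invocation are illustrative assumptions (the deck's own examples use cozclient with the -ib option):

  #!/bin/sh
  # Read the z/OS data set allocated to DD INPUT in the launching
  # job step and land it as a local file on the target server.
  fromdsn -b DD:INPUT > /tmp/input.bin

  # Compress locally and write the result back to a z/OS Unix file.
  gzip -c /tmp/input.bin | tofile -b /tmp/out.gz

  # Run a z/OS Unix command from the target system, capturing output.
  cozclient ls -al /etc > /tmp/zos-listing.txt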

zEnterprise Hybrid Batch Processing (architecture diagram)

PureData Hybrid Batch Processing (architecture diagram)

Hybrid Batch Hello World
A simple example illustrating the principles of Hybrid Batch Processing:
- Launch a process on a remote Linux server
- Write a message to stdout
- In a pipeline: read the contents of a data set from a job step DD, compress the contents using the Linux gzip command, and write the compressed data to the z/OS Unix file system
- Exit with a return code that sets the job step CC

Hello World, side by side:

Linux (the script, shipped via DD:STDIN):
  echo "Hello $(uname)!"
  fromdsn -b DD:INPUT | gzip -c | tofile -b /tmp/out.gz
  exit 4

z/OS JCL:
  //HYBRIDZ JOB ()
  //RUN      EXEC PROC=COZPROC,
  //         ARGS='u@linux'
  //COZLOG   DD SYSOUT=*
  //STDOUT   DD SYSOUT=*
  //INPUT    DD DSN=MY.DATA
  //STDIN    DD *
  //

The job step ends with RC = 4, and the compressed data lands in the z/OS Unix file /tmp/out.gz.

Hello World: Hybrid Batch
1. A script is executed on a virtual server from a z/OS batch job step
2. The script uses a program that already exists: gzip
3. Script output is redirected to the z/OS spool
4. z/OS resources are easily accessed using fromdsn, tofile, etc.
5. The script exit code is adopted as the z/OS job step CC

Hello World DD:STDOUT
  Hello Linux!

Hello World DD:COZLOG
  CoZLauncher[N]: version: 2.2.0 2012-09-01
  cozagent[N]: version: 1.1.0 2012-03-16
  fromdsn(DD:STDIN)[N]: 5 records/400 bytes read
  fromdsn(DD:INPUT)[N]: 78 records/6240 bytes read
  tofile(/tmp/out.gz)[N]: 1419 bytes written
  todsn(DD:STDOUT)[N]: 13 bytes written
  todsn(DD:STDERR)[N]: 0 bytes written
  CoZLauncher[E]: u@linux target ended with RC=4

Hello World DD:JESMSGLG
  JOB01515 ---- FRIDAY, 7 SEPT 2012 ----
  JOB01515 IRR010I USERID GOETZE IS ASSIG
  JOB01515 ICH70001I GOETZE LAST ACCESS AT
  JOB01515 $HASP373 HYBRIDZ STARTED INIT
  JOB01515 -
  JOB01515 -STEPNAME PROCSTEP RC EXCP
  JOB01515 -RUN      COZLNCH  04 1345
  JOB01515 -HYBRIDZ ENDED. NAME-
  JOB01515 $HASP395 HYBRIDZ ENDED

Co:Z Hybrid Batch Network Security
Trusted OpenSSH is used for network security:
- IBM Ported Tools OpenSSH client on z/OS
- OpenSSH sshd on the target system
By default, data transfer is tunneled (encrypted) over the ssh connection. Optionally, data can be transferred over raw sockets (option: ssh-tunnel=false). This offers very high performance without encryption costs, and is ideal for a secure network such as zEnterprise HiperSockets or the IEDN.

Co:Z Hybrid Batch Data Security
Data security is z/OS centric. All z/OS resource access is through the job step:
- Controlled by SAF (RACF/ACF2/TSS)
- Normal user privileges
Storing remote user credentials in SAF digital certificates can extend the reach of the z/OS security envelope to the target system. Shared certificate access enables multiple authorized z/OS users to use a single target system id. Dataset Pipes streaming technology can be used to reduce data at rest.

Bash Process Substitution
Make a command appear as a file:
- <(cmd) - cmd appears as a readable /dev/fd/nn
- >(cmd) - cmd appears as a writable /dev/fd/nn
Example: cat <(ls -al) behaves like this:
  mkfifo /dev/fd/63
  ls -al > /dev/fd/63 &
  cat /dev/fd/63
  rm /dev/fd/63
Very handy for enabling data in flight in hybrid batch processing using fromdsn and todsn.
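As a hedged sketch of that data-in-flight pattern (hypothetical DD names; assumes the script runs under the Co:Z Launcher so the Dataset Pipes commands can reach the job step), both directions can be combined on one command line:

  # Read a data set through a readable fd and write the result through
  # a writable fd -- nothing is staged on the target file system.
  gzip -c <(fromdsn -b DD:INPUT) > >(tofile -b /tmp/out.gz)

  # Count the records of a z/OS data set without landing it on disk.
  wc -l <(fromdsn DD:INPUT)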

z/OS JCL:
  //APPINT   JOB (),'COZ',MSGCLASS=H,NOTIFY=&SYSUID
  //CUSTDATA EXEC PGM=CUSTCOB
  //OUTDD    DD DSN=&&DATA,DISP=(NEW,PASS),
  //            UNIT=SYSDA,SPACE=(CYL,(20,20))
  //COZLOAD  EXEC PROC=COZPROC,ARGS='u@linux'
  //PARMS    DD DSN=HLQ.ORACLE.PARMS,DISP=SHR
  //CUSTDATA DD DSN=&&DATA,DISP=(OLD,DELETE)
  //CUSTCTL  DD DSN=HLQ.CUST.CTL,DISP=SHR
  //CUSTLOG  DD SYSOUT=*
  //STDIN    DD *

Script on Linux on z / zBX (DD:STDIN):
  sqlldr control=<(fromdsn //DD:CUSTCTL), \
         data=<(fromdsn //DD:CUSTDATA), \
         parfile=<(fromdsn //DD:PARMS), \
         log=>(todsn //DD:CUSTLOG)

Process Substitution Summary
- File centric utilities like sqlldr can be used without modification
- Facilitates concurrent transfer and loading: no data at rest!
- High performance
- Operations can observe real-time job output in the JES spool
- Dataset Pipes commands combined with process substitution allow the SAF security envelope to be extended to the remote system

Big Data and z/OS
z/OS systems often have the Big Data we want to analyze:
- Very large DB2 instances
- Very large data sets
But the Hadoop ecosystem is not well suited to z/OS:
- Designed for a cluster of many small, relatively inexpensive computers
- Although Hadoop is Java centric, several tools (e.g. R) don't run on z/OS
- z/OS compute and storage costs are high
Hybrid Batch Processing offers a solution:
- A single SAF profile for a security envelope extending to the Big Data environment
- Exploitation of high speed network links (HiperSockets, IEDN)
- z/OS centric operational control

Co:Z Toolkit and Big Data
The Co:Z Launcher and Dataset Pipes utilities facilitate:
- Loading HDFS with z/OS data: DB2, VSAM, sequential data sets, Unix System Services POSIX files
- Map/Reduce analysis: drive Hive, Pig, RHadoop, etc. with scripts maintained on z/OS
- Monitoring progress in the job log
- Moving results to z/OS: job spool, DB2, data sets, POSIX files

Processing z/OS syslog data with Hive
- Connect z/OS Unix System Services file system syslog data and Hadoop
- Illustrate hybrid batch use of common Big Data tools:
  hadoop fs - load Hadoop HDFS
  Hive - run Map/Reduce with an SQL-like table definition and query

Processing z/OS syslog data with Hive
The z/OS OpenSSH server logs authorization activity in a syslog Unix System Services file, /var/log/auth.log. Included in these messages are records of failed password authorization attempts for a userid:
  Failed password for invalid user <userid>
We wish to analyze this data to determine which userids are most commonly associated with failed password attempts.
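For intuition only, the same tally could be produced on a single, small log with ordinary Unix tools; the Hive approach that follows is what scales when the logs outgrow one machine. The field position here is an assumption based on the sample records shown later in this deck:

  # userid is the 11th whitespace-delimited field in lines like:
  #   Oct 13 21:12:22 S0W1 sshd[65575]: Failed password for invalid user root
  grep 'Failed password for invalid user' /var/log/auth.log \
    | awk '{ print $11 }' \
    | sort | uniq -c | sort -rn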

Processing z/OS syslog data with Hive
  //COZUSERH JOB (),'COZ',MSGCLASS=H,NOTIFY=&SYSUID
  //RUNCOZ   EXEC PROC=COZPROC,ARGS='-LI user@linux'
  //COZCFG   DD *
  saf-cert=ssh-ring:rsa-cert
  ssh-tunnel=false
  //HIVEIN   DD DISP=SHR,DSN=COZUSER.HIVE.SCRIPTS(SYSLOG)
  //STDIN    DD *
  fromfile /var/log/auth.log | hadoop fs -put - /logs/auth.log
  hive -f <(fromdsn DD:HIVEIN)

In the first step, fromfile reads the z/OS file /var/log/auth.log, and on Linux hadoop fs -put streams it into HDFS as /logs/auth.log.

In the second step, hive -f <(fromdsn DD:HIVEIN) feeds the Hive script (the CREATE TABLE and query below) from the z/OS data set COZUSER.HIVE.SCRIPTS(SYSLOG) straight to the hive command on Linux.

//HIVEIN DD DISP=SHR,DSN=COZUSER.HIVE.SCRIPTS(SYSLOG):
  CREATE TABLE IF NOT EXISTS syslogdata (
    month STRING, day STRING, time STRING,
    host STRING, event STRING, msg STRING)
  ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
  WITH SERDEPROPERTIES ("input.regex" =
    "(\\w+)\\s+(\\d+)\\s+(\\d+:\\d+:\\d+)\\s+(\\w+\\w*\\w*)\\s+(.*?\\:)\\s+(.*$)")
  STORED AS TEXTFILE LOCATION '/logs';

HDFS: /logs
  Oct 13 21:12:22 S0W1 sshd[65575]: Failed password for invalid user root
  Oct 13 21:12:21 S0W1 sshd[65575]: subsystem request for sftp
  Oct 13 21:12:22 S0W1 sshd[65575]: Failed password for invalid user nagios
  Oct 13 21:12:21 S0W1 sshd[65575]: Accepted publickey for goetze
  Oct 13 21:12:22 S0W1 sshd[65575]: Port of Entry information retained for

The query in the same script counts the userids from the matching "Failed password for invalid user" messages:
  SELECT split(msg, ' ')[5] username, count(*) num
  FROM syslogdata
  WHERE msg LIKE 'Failed password for invalid user%'
  GROUP BY split(msg, ' ')[5]
  ORDER BY num desc, username;

Hive Log Output
By default, Hive writes its log to the stderr file descriptor on the target system. Co:Z automatically redirects it back to the job spool, DD:STDERR:
  Time taken: 4.283 seconds
  Total MapReduce jobs = 2
  Launching Job 1 out of 2
  Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
  2014-04-24 08:33:55,847 Stage-1 map = 0%, reduce = 0%
  2014-04-24 08:36:49,447 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 6.89 sec

Hive Query Output
By default, Hive writes its output to the stdout file descriptor on the target system. Co:Z automatically redirects it back to the job spool, DD:STDOUT:
  root 68215
  admin 1511
  www 315
  nagios 240
  test 226
  oracle 191
This easily expands to process large numbers and types of log files incrementally stored in HDFS.
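One way to realize that incremental pattern, sketched under the same assumptions as the job above (the date-stamped naming scheme is hypothetical): load each day's log under its own name, and the table's LOCATION '/logs' directory automatically covers every file placed there:

  # Stream today's log into HDFS under a per-day name; the Hive table
  # defined over LOCATION '/logs' reads all files in that directory.
  fromfile /var/log/auth.log | hadoop fs -put - /logs/auth.$(date +%Y%m%d).log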

Processing z/OS DB2 data with RHadoop
z/OS DB2 High Performance Unload (HPU):
- Provides (among other things) rapid unload of table spaces
- Table space data can be accessed from the target system with the Co:Z cozclient Dataset Pipes command and an inzutilb HPU wrapper
- Enables data in flight from z/OS DB2 to Big Data environments
R and Hadoop have a natural affinity:
- RHadoop, developed by Revolution Analytics (Apache 2 License)
- Packages include rmr, rhdfs, rhbase

Processing z/OS DB2 data with RHadoop
The z/OS DB2 table DOVET.CLICKS contains information about each visitor to a website: timestamp, IP address, URL, ID, city, country, state. We want to analyze this data using R to predict the likelihood of next-day visits by country.

Processing z/os DB2 data with RHadoop //CZUSERR JOB (), COZ',MSGCLASS=H,NOTIFY=&SYSUID,CLASS=A //RUNCOZ EXEC PROC=COZPROC,ARGS= u@linux //COZCFG DD * saf-cert=ssh-ring:rsa-cert ssh-tunnel=false //STDIN DD * hadoop fs -rmr /user/rhadoop hadoop fs -mkdir /user/rhadoop/in hadoop fs -mkdir /user/rhadoop/out fromdsn //DD:HPUIN cozclient -ib inzutilb.sh 'DBAG,HPU' hadoop fs -put - /user/rhadoop/in/clicks.csv Rscript <(fromdsn DD:RSCRIPT) hadoop fs -cat /user/rhadoop/out/* todsn DD:RRESULT /* //RSCRIPT DD DISP=SHR,DSN=COZUSER.RHADOOP(CLICKS) //RRESULT DD SYSOUT=* //HPUIN DD * 35

Dataset Pipes cozclient command and INZUTILB
The cozclient command can be used by the target script to run a z/OS Unix System Services command; the command's output is piped back to the target script:
  fromdsn //DD:HPUIN | cozclient -ib inzutilb.sh 'DBAG,HPU'
- cozclient reads its input from stdin (piped from DD:HPUIN)
- inzutilb.sh is a wrapper for the DB2 HPU utility (INZUTILB); it runs authorized on z/OS and dynamically allocates the HPU DDs:
  SYSIN    : stdin
  SYSREC1  : stdout
  SYSPRINT : stderr
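Because inzutilb.sh maps SYSIN to stdin, SYSREC1 to stdout and SYSPRINT to stderr, ordinary shell redirection applies. A hedged sketch (the local log file name is hypothetical) that keeps the HPU report for inspection while streaming the unloaded rows onward:

  # SYSIN    <- HPU control statements piped from DD:HPUIN
  # SYSREC1  -> unloaded rows, streamed straight into HDFS
  # SYSPRINT -> HPU report, captured in a local file
  fromdsn //DD:HPUIN \
    | cozclient -ib inzutilb.sh 'DBAG,HPU' 2> /tmp/hpu-sysprint.txt \
    | hadoop fs -put - /user/rhadoop/in/clicks.csv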

DB2 HPU
  fromdsn //DD:HPUIN | cozclient -ib inzutilb.sh 'DBAG,HPU' |
    hadoop fs -put - /user/rhadoop/in/clicks.csv

  //HPUIN DD *
  UNLOAD TABLESPACE
  DB2 FORCE
  LOCK NO
  SELECT COUNTRY,TS,COUNT(*) FROM DOVET.CLICKS GROUP BY COUNTRY,TS
  OUTDDN SYSREC1
  FORMAT DELIMITED SEP ',' DELIM '"' EBCDIC

Sample DOVET.CLICKS rows (ts, ip, url, swid, city, country, state):
  2014 99.122 http://acme.com {7A homestead usa fl
  2014 203.19 http://acme.com {6E perth aus wa
  2014 67.230 http://acme.com {92 guaynabo pri na

HPU output (SYSREC1):
  "aus",2014-03-01, 2
  "aus",2014-03-03, 27

DB2 HPU data flow: the delimited rows produced by HPU on z/OS are piped, in flight, to the Linux target, where hadoop fs -put streams them into HDFS as /user/rhadoop/in/clicks.csv.

DB2 HPU Status - DD:COZLOG
  CoZLauncher[N]: version: 2.4.4 2014-03-18
  cozagent[N]: version: 1.1.2 2013-03-19
  fromdsn(DD:STDIN)[N]: 8 records/640 bytes read; 299 bytes written
  fromdsn(DD:HPUIN)[N]: 7 records/560 bytes read; 172 bytes written
  1INZU224I IBM DB2 HIGH PERFORMANCE UNLOAD V4.1
  INZU219I PTFLEVEL=PM98396-Z499
  INZI175I PROCESSING SYSIN AS EBCDIC.
  ----+----1----+----2----+----3----+----4----+----5----+---
  000001 UNLOAD TABLESPACE
  000002 DB2 FORCE
  000003 LOCK NO
  000004 SELECT COUNTRY, TS, COUNT(*) FROM DOVETAIL.CLICKS GROUP BY COUNTRY, TS
  000005 OUTDDN SYSREC1
  000006 FORMAT DELIMITED SEP ',' DELIM '"'
  000007 EBCDIC
  INZI020I DB2 SUB SYSTEM... DB2 VERSION DBAG DATASHARING GROUP DBAG 1010 NFM

RHadoop
  //CZUSERR JOB (),'COZ',MSGCLASS=H,NOTIFY=&SYSUID,CLASS=A
  //RUNCOZ  EXEC PROC=COZPROC,ARGS='u@linux'
  //STDIN   DD *
  Rscript <(fromdsn DD:RSCRIPT)
  //RSCRIPT DD DISP=SHR,DSN=COZUSER.RHADOOP(CLICKS)

The R script reads /user/rhadoop/in, runs the map and reduce (predict) steps, and writes its results to /user/rhadoop/out.

DD:RSCRIPT - Mapper
  # Modified from Hortonworks example
  library(rmr2)

  insertrow <- function(target.dataframe, new.day) {
    new.row <- c(new.day, 0)
    target.dataframe <- rbind(target.dataframe, new.row)
    target.dataframe <- target.dataframe[
      order(c(1:(nrow(target.dataframe)-1), new.day-0.5)), ]
    row.names(target.dataframe) <- 1:nrow(target.dataframe)
    return(target.dataframe)
  }

  mapper = function(null, line) {
    keyval(line[[1]], paste(line[[1]], line[[2]], line[[3]], sep=","))
  }

DD:RSCRIPT - Reducer
  reducer = function(key, val.list) {
    if (length(val.list) < 10) return()
    list <- list()
    country <- unlist(strsplit(val.list[[1]], ","))[[1]]
    for (line in val.list) {
      l <- unlist(strsplit(line, split=","))
      x <- list(as.POSIXlt(as.Date(l[[2]]))$mday, l[[3]])
      list[[length(list)+1]] <- x
    }
    list <- lapply(list, as.numeric)
    frame <- do.call(rbind, list)
    colnames(frame) <- c("day", "clickscount")
    i = 1
    while (i < 16) {
      if (i <= nrow(frame)) curday <- frame[i, "day"]
      if (curday != i) frame <- insertrow(frame, i)
      i <- i + 1
    }
    model <- lm(clickscount ~ day, data=as.data.frame(frame))
    p <- predict(model, data.frame(day=16))
    keyval(country, p)
  }

DD:RSCRIPT - mapreduce
  mapreduce(
    input="/user/rhadoop/in",
    input.format=make.input.format("csv", sep=","),
    output="/user/rhadoop/out",
    output.format="csv",
    map=mapper,
    reduce=reducer
  )

DB2 RHadoop Status - DD:STDERR
  14/04/23 13:39:45 INFO mapreduce.Job: map 100% reduce 100%
  14/04/23 13:39:46 INFO mapreduce.Job: Job job_1397667423931_0064 completed successfully
  14/04/23 13:39:46 INFO mapreduce.Job: Counters: 44
  File System Counters
    FILE: Number of bytes read=17168...
  Job Counters
    Launched map tasks=2...
  Map-Reduce Framework
    Map input records=79...
  Shuffle Errors
    BAD_ID=0...
  rmr
    reduce calls=21
  14/04/23 13:39:46 INFO streaming.StreamJob: Output directory: /user/rhadoop/out

Processing z/OS DB2 data with RHadoop
  //CZUSERR JOB (),'COZ',MSGCLASS=H,NOTIFY=&SYSUID,CLASS=A
  //RUNCOZ  EXEC PROC=COZPROC,ARGS='u@linux'
  ...
  hadoop fs -rmr /user/rhadoop
  hadoop fs -mkdir /user/rhadoop/in
  hadoop fs -mkdir /user/rhadoop/out
  fromdsn //DD:HPUIN | cozclient -ib inzutilb.sh 'DBAG,HPU' |
    hadoop fs -put - /user/rhadoop/in/clicks.csv
  Rscript <(fromdsn DD:RSCRIPT)
  hadoop fs -cat /user/rhadoop/out/* | todsn DD:RRESULT
  //RRESULT DD SYSOUT=*

DD:RRESULT output:
  "usa" "36323.3142857143"
  "pri" "170.956093189964"

Processing z/OS DB2 data with RHadoop
Hybrid Batch principles revisited:
1. R analysis is executed on a virtual server from a z/OS batch job step
2. Uses existing programs: Rscript, hadoop fs
3. Output is redirected to the z/OS spool
4. DB2 HPU data is easily accessed via cozclient
5. The script exit code is adopted as the z/OS job step CC
Big Data opportunities:
- Incremental growth in Hadoop: zBX/PureData systems are relatively inexpensive
- All processing stays within the z/OS security envelope
- Facilitates R analysis of DB2 data over time
- Opens up new analysis insights without affecting production systems

Summary
- zEnterprise / z/OS / Linux provides a hybrid computing environment
- The Co:Z Launcher and Target System Toolkit provide a framework for hybrid batch processing
- Co:Z Hybrid Batch enables Big Data with z/OS: high speed data movement; SAF security dictates access to z/OS resources and can be used to control access to target (Big Data) systems; z/OS retains operational control
Website: http://dovetail.com
Email: info@dovetail.com