IMS Data Integration with Hadoop




IMS Data Integration with Hadoop
Karen Durward, InfoSphere Product Manager
17/03/2015, Technical Symposium 2015

z/OS Structured Data Integration for Big Data

The Big Data Landscape
- Introduction to Hadoop: What, Why, How
- The community cares about Hadoop because... exponential value at the intersection of Hadoop and structured data

z/OS Data Integration with Hadoop: The Requirements and The Journey
- Hadoop on z? z/OS data integration

InfoSphere System z Connector for Hadoop
- Fast, easy, low-investment z/OS data delivery to HDFS and Hive

InfoSphere Data Replication
- Keeping your Big Data up to date

Hadoop: Created to Go Beyond Traditional Database Technologies, a Foundation for Exploratory Analytics

Hadoop grew out of work at Google (the GFS and MapReduce papers) and Yahoo! (the open source implementation) to address issues with then-available database technology:
- Data volumes could not be cost-effectively managed using database technologies
- Analyzing larger volumes of data can provide better results than sampling smaller amounts
- Insights needed to be mined from unstructured data types
- Data was being explored to understand its potential value to the business

Typical use cases:
- Analyze a variety of information: novel analytics on a broad set of mixed information that could not be analyzed before
- Analyze extreme volumes of information: cost-efficiently process and analyze petabytes of information
- Discovery and experimentation: a quick and easy sandbox to explore data and determine its value

What is Hadoop? Divide and Conquer!

A rapidly evolving open source software framework for creating and using hardware clusters to process vast amounts of data. The version 2.x framework consists of:
- Common Core: the basic modules (libraries and utilities) on which all components are built
- Hadoop Distributed File System (HDFS): manages data stored on multiple machines for very high aggregate bandwidth across the "cluster" of machines
- MapReduce: a programming model to support high-volume processing of data in the cluster
- Yet Another Resource Negotiator (YARN): a platform to manage the cluster's compute resources, de-coupling Hadoop workload and resource management

Typical Hadoop Data Flow

1. Load data into an HDFS cluster, optimizing for parallelism and reliability:
- Break the source into large blocks (typically 64MB) so that each block can be written (and read) independently
- Each block is written to a node; the more nodes, the more dispersion of the data for future parallel processing
- Redundant copies of each block are maintained on separate nodes to protect against hardware failures

Redundancy for reliability is configurable; three copies is the typical default. Simplistically, the more unreliable the environment, the higher the degree of redundancy required. A sketch of this load step follows.

[Diagram: a client node writes blocks of a source file to multiple data nodes; the name (metadata) node tracks where each block lands.]
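As an illustration of step 1, here is a minimal sketch using Hadoop's Java FileSystem API. The NameNode URI, the file paths, and the exact block-size and replication values are assumptions for the example, not settings from this presentation:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsLoad {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Three-way redundancy: each block is copied to three separate data nodes.
        conf.setInt("dfs.replication", 3);
        // Split the source into large, independently written/read blocks (64MB here).
        conf.setLong("dfs.blocksize", 64L * 1024 * 1024);

        // The name (metadata) node tracks where each block lands. (Hypothetical URI.)
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);

        // The client streams the file; HDFS spreads its blocks across data nodes.
        fs.copyFromLocalFile(new Path("/local/source.dat"),
                             new Path("/data/source.dat"));
        fs.close();
    }
}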

Typical Hadoop Data Flow

1. Load data into an HDFS cluster
2. Analyze the data in the cluster using a MapReduce "program":
- Map: drive analysis of the data
- Reduce: construct a result, writing it back into the cluster
3. Use the result!

[Diagram: the master assigns each input split to a Map task on a client node; the Map phase writes intermediate files, which the Reduce phase consumes to produce the output files.]
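To make the Map and Reduce phases concrete, here is the classic word-count job written against Hadoop's Java MapReduce API; a generic example, not part of the original deck:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
  // Map: emit (word, 1) for every word in this task's input split.
  public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();
    @Override
    protected void map(LongWritable key, Text value, Context ctx)
        throws IOException, InterruptedException {
      for (String token : value.toString().split("\\s+")) {
        if (!token.isEmpty()) { word.set(token); ctx.write(word, ONE); }
      }
    }
  }

  // Reduce: sum the counts for each word and write the result back to the cluster.
  public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) sum += v.get();
      ctx.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenMapper.class);
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // input splits in HDFS
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // output files in HDFS
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}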

Hadoop Cluster Configuration Implications

Redundancy drives network traffic: with three-way redundancy, each terabyte of data results in three terabytes of network traffic.

Parallelism drives performance:
- Scale OUT (more nodes) and/or spread files OUT (more blocks): spread blocks of data across more nodes so more blocks can be read in parallel; a file can be spread to more nodes if you have enough of them; network activity is spread across many nodes/files
- Scale UP (more CPUs and/or memory per node rather than more nodes): increases the density of each node, concentrating more network activity on each node

The Landscape is Rapidly Evolving: More Apache Frameworks and Products You'll Hear About

- Hive: Apache data warehouse framework, accessible using HiveQL
- Spark: in-memory framework providing an alternative to MapReduce
- HBase: the Apache Hadoop database
- Pig: high-level platform for creating long, deep Hadoop (MapReduce) programs
- ZooKeeper: infrastructure and services enabling synchronization across large clusters
- Flume: collects and integrates data into Hadoop from web/app services
- Oozie: workflow processing connecting multiple types of Hadoop jobs

A minimal example of HiveQL access follows.
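Hive matters later in this deck (the z Connector lands mainframe data in Hive tables), so here is a minimal sketch of issuing HiveQL from Java over HiveServer2's standard JDBC interface. The host, port, credentials, and the transactions table are invented for the example:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveQuery {
    public static void main(String[] args) throws Exception {
        // Hive's JDBC driver; the jar must be on the classpath.
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        // HiveServer2 endpoint; host, port, and database are assumptions.
        String url = "jdbc:hive2://hiveserver:10000/default";
        try (Connection conn = DriverManager.getConnection(url, "user", "");
             Statement stmt = conn.createStatement();
             // HiveQL looks like SQL but executes as jobs over data in HDFS.
             ResultSet rs = stmt.executeQuery(
                 "SELECT account_id, COUNT(*) AS txns " +
                 "FROM transactions GROUP BY account_id")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
            }
        }
    }
}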

z/OS Structured Data Integration for Big Data

The Big Data Landscape
- Introduction to Hadoop: What, Why, How
- The community cares about Hadoop because... exponential value at the intersection of Hadoop and structured data

Starting on the z/OS data integration with Hadoop journey
- InfoSphere System z Connector for Hadoop: fast, easy, low-investment z/OS data delivery to HDFS and Hive

Keeping Big Data Current
- InfoSphere Data Replication: continuous incremental updates

Imagine the possibility of leveraging all of your information assets:
- Precise fraud & risk detection
- Understand and act on customer sentiment
- Accurate and timely threat detection
- Predict and act on intent to purchase
- Low-latency network analysis

Traditional approach: structured, analytical, logical. "Here's a question. What's the answer?" The data is rich, historical, private, and structured: customers, history, transactions. Traditional sources (transaction data, internal app data, mainframe data, OLTP system data, ERP data) feed the data warehouse: structured, repeatable, linear.

New approach: creative, holistic thought, intuition. "Here's some data. What does it tell me?" The data is intimate and unstructured: social, mobile, GPS, web, photos, video, email, logs. New sources (multimedia, web logs, social data, text data such as emails, sensor data, images, RFID) feed Hadoop and streams: unstructured, exploratory, dynamic.

The value is in new ideas, questions, and answers. Transformational benefit comes from integrating "new" data and methods with traditional ones!

This is Driving the Shifting Sands of Enterprise IT: New Ways of Thinking for Transformative Economics

Traditional Approach              New Approach
Vertical infrastructure           Distributed data grids
Design schemas in advance         Evolve schemas on-the-fly
What data should I keep?          Keep everything, just in case
What reports do I need?           Test theories, model on-the-fly
ETL, down-sample, aggregate       Knowledge from raw data
On-premise                        On-premise, cloud, hybrid

Transaction & Log Data Dominate Big Data Deployments; a Very High Proportion of This Data Resides on System z

[Chart: survey of big data deployments by data type; N=465, multiple responses allowed.]
Source: Gartner research note "Survey Analysis: Big Data Adoption in 2013 Shows Substance Behind the Hype," September 12, 2013. Analysts: Lisa Kart, Nick Heudecker, Frank Buytendijk.

Hadoop and System z: The Requirements and The Journey
Technical Symposium 2015

Hadoop Topology Choices for System z Data

1. Processing done outside z (extract and move the data)
- Move System z data over the network to an external cluster
- Petabytes possible
- Requires additional infrastructure; challenges with scale, governance, and ingestion
- Data is outside System z control

2. Processing done on z (Hadoop cluster on zLinux)
- Ingest System z data directly
- Gigabytes to terabytes reasonable
- Rapidly provision new node(s); near-linear scale
- System z is the control point

3. Extract and consume (z controls the process)
- Send MapReduce jobs to an external cluster and consume the result
- External data is NOT routed through System z
- System z governance for the result set
- System z is the control point

The most likely driver for Hadoop on System z: the value of exploratory analytic models applied to System z data, weighed against the risk associated with numerous copies of sensitive data dispersed across commodity servers with broad access.

Hadoop on System z: What Makes Sense When?

Case 1: Hadoop on the Mainframe
[Diagram: z/OS sources (DB2, VSAM, QSAM, SMF, RMF, logs) on general-purpose CPs feed Linux guests under z/VM on IFLs.]
- Data originates mostly on the mainframe (log files, database extracts) and data security is important
- z governance & security models are needed
- Network volume or security concerns
- Moderate volumes: 100 GB to 10s of TBs
- Hadoop's value is rich exploratory analytics (hybrid transaction-analytic appliances serve traditional analytics)

Case 2: Hadoop off the Mainframe
[Diagram: the same z/OS sources feed a cluster of Linux servers outside the mainframe.]
- Most data originates off of the mainframe
- Security is less of a concern since the data is not "trusted" anyway
- Very large data sets: 100s of TBs to PBs
- Hadoop is valued for its ability to economically manage large datasets
- Desire to leverage lowest-cost processing and, potentially, cloud elasticity

IBM InfoSphere BigInsights uniquely offers multiple technology options and deployment models: Intel servers, IBM Power, IBM System z, or on cloud.

IBM's InfoSphere z/OS Data Integration with Hadoop: From the Sandbox to the Enterprise

The simplicity, speed, and security of the z Connector load delivers a sandbox to jump-start all of your Big Data projects.

[Diagram: the InfoSphere System z Connector for Hadoop moves z/OS data (VSAM, QSAM, SMF, RMF) to Linux for System z on IFLs, or to Linux on Power or x86 (System x) servers.]

IBM's InfoSphere z/OS Data Integration with Hadoop: From the Sandbox to the Enterprise

Optimized, change-only, real-time data replication keeps your Big Data current with lower operational cost.

[Diagram: InfoSphere Data Replication and InfoSphere Classic CDC capture changes from DB2 and CICS logs on z/OS, alongside the System z Connector's movement of VSAM, QSAM, and SMF/RMF data to Linux targets on System z, Power, or x86.]

IBM's InfoSphere z/OS Data Integration with Hadoop: From the Sandbox to the Enterprise

Transformation-rich data integration and governance bridges traditional data warehousing with Big Data for the enterprise.

[Diagram: InfoSphere Information Server (DataStage), on Linux for System z IFLs or on Power/x86 Linux, joins the System z Connector for Hadoop and InfoSphere Data Replication/Classic CDC (DB2, CICS logs, VSAM, QSAM, SMF/RMF) in the overall architecture.]

z/OS Data Integration with Hadoop: The InfoSphere System z Connector for Hadoop
Technical Symposium 2015

IBM InfoSphere System z Connector for Hadoop: Setup in Hours, Generated Basic Transforms, Interactive or Scheduled

- EXTRACT (no SQL, no COBOL): databases (DB2 for z/OS), VSAM files, sequential files, Syslog, Operlog, SMF/RMF
- MOVE & CONVERT (automatic transforms): over HiperSockets or OSA, from z/OS to Linux (System z, Power, or System x), inserting into HDFS
- ANALYZE: leverage Hadoop

Click and copy.

IBM InfoSphere System z Connector for Hadoop

Leverage z data with Hadoop on your platform of choice:
- IBM System z for security
- Power Systems
- Intel servers

Point-and-click or batch self-service data access, with lower cost processing and storage.

[Diagram: on the System z mainframe, the connector reads z/OS sources (SMF, DB2, VSAM, logs) on the CPs and feeds a System z Connector instance on Linux for System z, under z/VM on IFLs, running MapReduce, HBase, and Hive over HDFS.]

IBM InfoSphere System z Connector for Hadoop: Key Product Features

- Supports multiple Hadoop distributions on and off the mainframe
- Multiple source formats: IMS, DB2, VSAM/QSAM, log files
- HiperSockets and 10 GbE data transfer
- Drag-and-drop interface; no programming required
- Multiple destinations: Hadoop, Linux file systems, streaming endpoints
- Define multiple data transfer profiles
- Streaming interface: filter and translate data & columns on the fly, on the target
- Secure pipe with RACF integration
- Preserves metadata, landing mainframe data in Hive tables

IBM InfoSphere System z Connector for Hadoop: Architectural View

[Architecture diagram: z/OS sources (DB2, VSAM, logs) flow through USS components, the zConnector "Connect" endpoint over SSL, to a Linux server (a z/VM guest, or distributed Linux on Intel or Power), where the vHub data plugin delivers data to the data platform for analysis; a Tomcat servlet and a client applet provide data visualization.]

IBM InfoSphere System z Connector for Hadoop: Architectural View

[Same architecture diagram as the previous slide.]

- Triggers a DB2 unload or high-performance unload, streaming the binary result to the target; no z/OS DASD is involved
- Reads other sources (e.g., IMS via its JDBC interface), streaming binary results to the target
- Note: log file transfers pick up from the date/time stamp of the previous transfer, automating target updates; a sketch of this idea follows
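To illustrate the timestamp-based pickup, here is a generic sketch of the technique only; it is not the connector's actual implementation, and the file names and log format are hypothetical:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Instant;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class IncrementalLogTransfer {

    // Bookmark file persisting the timestamp of the last transferred record.
    private static final Path BOOKMARK = Path.of("last-transfer.timestamp");

    // Hypothetical log format: an ISO-8601 timestamp, a space, then the payload.
    private static Instant timestampOf(String line) {
        return Instant.parse(line.split(" ", 2)[0]);
    }

    public static void main(String[] args) throws IOException {
        Instant lastTransfer = Files.exists(BOOKMARK)
                ? Instant.parse(Files.readString(BOOKMARK).trim())
                : Instant.EPOCH; // first run: transfer everything

        List<String> newRecords;
        try (Stream<String> lines = Files.lines(Path.of("system.log"))) {
            // Keep only records written after the previous transfer.
            newRecords = lines
                    .filter(line -> timestampOf(line).isAfter(lastTransfer))
                    .collect(Collectors.toList());
        }

        // Ship the new records to the target (stubbed as stdout here).
        newRecords.forEach(System.out::println);

        // Advance the bookmark to the newest record actually shipped, not to
        // "now", so records arriving mid-run are picked up next time.
        Instant newest = newRecords.stream()
                .map(IncrementalLogTransfer::timestampOf)
                .max(Instant::compareTo)
                .orElse(lastTransfer);
        Files.writeString(BOOKMARK, newest.toString());
    }
}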

IBM InfoSphere System z Connector for Hadoop: Architectural View

[Same architecture diagram as the previous slides.]

The Linux-side components:
- Apply row-level filtering
- Reformat the data
- Write to the target HDFS/Hive
- Write metadata in Hive

(Demo screens follow.)

InfoSphere System z Connector for Hadoop: Summary

A secure pipe for data
- RACF integration: standard credentials
- Data streamed over a secure channel using hardware crypto

Rapid deployment
- Integrate z/OS data in a few hours
- Easy-to-use ingestion engine: light-weight, no programming required
- Native data collectors accessed via a graphical user interface
- Wide variety of data sources supported; conversions handled automatically
- Streaming technology neither loads z/OS engines nor requires DASD for staging

Best use cases
- An HDFS/Hive sandbox for initial deployments: explore your data; easy to set up in hours, not days!
- Operational analytics using z/OS log data (SMF, RMF, ...): explore operational data using Hadoop on day one!
- Moderate volumes (100s of GBs to 10s of TBs) of transactional data, where the source of the data is z/OS and security may be a primary concern

z/OS Data Integration with Hadoop: Keeping Your Hadoop Data Current
Technical Symposium 2015

IBM's InfoSphere Data Replication (IIDR) Coverage

Sources: DB2 for z/OS, IMS, VSAM, DB2 (z/OS, i, LUW), Informix, Oracle, MS SQL Server, Sybase

Targets: DB2 for z/OS, IMS, VSAM, PureData for Analytics (Netezza), Teradata, Information Server, HDFS/Hive, message queues, files, FlexRep (JDBC targets), customized apply, DB2 (i, LUW), Informix, Oracle, MS SQL Server, Sybase, ESB, Cognos Now!, MySQL, Greenplum, ...

IMS to Hadoop: Read IMS logs, send committed changes, apply the changes

WHAT: IMS logging. ACTION: IMS replication logs, introduced for IMS v10 and higher, specifically support IMS-to-IMS and IMS-to-non-IMS data replication.

[Architecture diagram: on the source server, IBM InfoSphere Classic CDC (Classic Data Architect, Management Console, replication metadata, Access Server, and the Classic Server with administration services, log read/merge, and UOR capture) reads the IMS logs, RECON data sets, and ACBLIB of the source databases, and sends changes over TCP/IP to the target engine of IBM InfoSphere Data Replication, where the apply agent maintains its own metadata.]

IMS to Non-IMS Data Replication: Read IMS logs, send committed changes, apply the changes

WHAT: IMS Log Reader. ACTION: A log reader capable of capturing changes from BOTH local and remote IMS logs. It ensures proper sequencing of committed changes for a single local IMS instance or for multiple logs in an IMSplex. IMPACT: One centralized log reader instance replaces the multiple instances required by Classic DEP v9.5, simplifying deployment and reducing overhead.

[Architecture diagram as on the previous slide.]

IMS to Non-IMS Data Replication: Read IMS logs, send committed changes, apply the changes

WHAT: Capture Engine. ACTION: Maintains conversations with the target servers, manages transaction boundaries, and buffers multiple changes for performance. (A sketch of this buffering idea follows.)

[Architecture diagram as on the previous slides.]
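As a rough illustration of managing transaction boundaries (the slides' "UOR capture"), the sketch below buffers changes per unit of recovery and forwards them only on commit. It is a generic pattern, not the Classic CDC implementation; all names are invented:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class UorCapture {
    // Changes buffered per unit of recovery (UOR), keyed by its id.
    private final Map<String, List<String>> pending = new HashMap<>();
    // Downstream consumer standing in for the conversation with the target server.
    private final Consumer<List<String>> sendToTarget;

    public UorCapture(Consumer<List<String>> sendToTarget) {
        this.sendToTarget = sendToTarget;
    }

    // Buffer a change record until its UOR's outcome is known.
    public void onChange(String uorId, String changeRecord) {
        pending.computeIfAbsent(uorId, id -> new ArrayList<>()).add(changeRecord);
    }

    // Commit: forward the buffered changes as one batch, preserving order.
    public void onCommit(String uorId) {
        List<String> changes = pending.remove(uorId);
        if (changes != null) sendToTarget.accept(changes);
    }

    // Abort: discard buffered changes; only committed work reaches the target.
    public void onAbort(String uorId) {
        pending.remove(uorId);
    }

    public static void main(String[] args) {
        UorCapture capture = new UorCapture(batch ->
                batch.forEach(c -> System.out.println("apply: " + c)));
        capture.onChange("uor-1", "INSERT ...");
        capture.onChange("uor-2", "UPDATE ...");
        capture.onAbort("uor-2");   // rolled back: never sent
        capture.onCommit("uor-1");  // committed: sent as one batch
    }
}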

InfoSphere Data Replication for IMS for z/OS: Details of Source Capture

IMS TM / IMS DB* — Start Notification Exit Routine. WHAT: a Classic routine associated with IMS's Partner Program Exit. ACTION: notifies the Classic Server over TCP/IP when an IMS system starts.

Batch DL/I — Batch Start-Stop Exit Routine. WHAT: a Classic routine associated with IMS's Log Exit. ACTION: notifies the Classic Server, with log information, when a Batch DL/I job starts or stops.

[Diagram: on the source server, the log reader service uses the DBRC API against RECON to locate the IMS logs, feeding change stream ordering and the capture services, with replication metadata held locally.]

* includes BMP and DBCTL

IMS to Non-IMS Data Replication: Read IMS logs, send committed changes, apply the changes

WHAT: Classic Server. ACTION: The latest version of the Classic Server, used by all IBM InfoSphere Classic products, reformats IMS data into a relational format, converses with IIDR target engines, and reads source data for full refresh. IMPACT: The same handling of IMS data as provided in Classic DEP.

[Architecture diagram as on the previous slides.]

IMS to Non-IMS Data Replication: Read IMS logs, send committed changes, apply the changes

WHAT: IIDR Target Server. ACTION: Applies changes to the target(s) while maintaining restart information in a local bookmark for recovery purposes. Additional target-based transformations can be applied. It integrates with many other InfoSphere solutions and is available for z/OS, Linux on System z, Linux, UNIX, and Windows. IMPACT: One target server for tens of targets, regardless of the source. (A sketch of bookmark-based apply follows.)

[Architecture diagram as on the previous slides.]
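To make the bookmark idea concrete, here is a generic sketch of an apply loop that records its position after each applied change, so a restart resumes without reapplying or skipping work. The names and storage format are assumptions, not IIDR internals:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class BookmarkApply {
    // Local bookmark: position of the last change successfully applied.
    private static final Path BOOKMARK = Path.of("apply.bookmark");

    public static void main(String[] args) throws IOException {
        long lastApplied = Files.exists(BOOKMARK)
                ? Long.parseLong(Files.readString(BOOKMARK).trim())
                : -1L;

        // Stand-in for the committed-change stream arriving from the source.
        List<String> changeStream = List.of(
                "0|INSERT ...", "1|UPDATE ...", "2|DELETE ...");

        for (String change : changeStream) {
            String[] parts = change.split("\\|", 2);
            long position = Long.parseLong(parts[0]);

            // Skip anything at or before the bookmark: it was already
            // applied before a crash or shutdown.
            if (position <= lastApplied) continue;

            applyToTarget(parts[1]);

            // Persist the bookmark only after the apply succeeds, so a
            // restart resumes exactly here.
            Files.writeString(BOOKMARK, Long.toString(position));
            lastApplied = position;
        }
    }

    private static void applyToTarget(String operation) {
        System.out.println("apply: " + operation); // stub target
    }
}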

Thank you!

IBM InfoSphere System z Connector for Hadoop
Point-and-click access to z/OS-resident data sources, with automated transfer to IBM BigInsights and third-party Hadoop clusters.

IBM InfoSphere System z Connector for Hadoop
Extensive browser-based help is built in.

IBM InfoSphere System z Connector for Hadoop
Multiple users can be provided with access to the web-based System z Connector data transfer tool.

IBM InfoSphere System z Connector for Hadoop
Directly access data on mainframe DASD.

IBM InfoSphere System z Connector for Hadoop
Directly access System z log files, including SMF, RMF, Syslog, and the operator logs.

IBM InfoSphere System z Connector for Hadoop
A step-by-step wizard guides users through the process of setting up connections to z/OS data sources (DB2 shown here).

IBM InfoSphere System z Connector for Hadoop
A similar wizard is used to configure Hadoop-based targets.

IBM InfoSphere System z Connector for Hadoop
JCL parameters are entered through the web-based GUI.

IBM InfoSphere System z Connector for Hadoop
Browse the contents of mainframe data sources from within the System z Connector for Hadoop interface.

IBM InfoSphere System z Connector for Hadoop
Copy source data to HDFS, the Linux file system, or Hive, or stream data directly to a receiving port on a Linux system or virtual machine, in your chosen format.

When it comes to realizing time to value, commercial framework differences matter:
- Capabilities: software tooling to build higher-quality, more maintainable applications quickly and cost-efficiently
- Infrastructure: deploy on reliable, cost-efficient infrastructure that matches quality-of-service requirements