Setting up Hadoop with MongoDB on Windows 7 64-bit
- Magnus Warren
SGT WHITE PAPER
Setting up Hadoop with MongoDB on Windows 7 64-bit
HCCP Big Data Lab
2015 SGT, Inc. All Rights Reserved
7701 Greenbelt Road, Suite 400, Greenbelt, MD Tel: (301) Fax: (301)
Setting up Hadoop with MongoDB on Windows 7 64-bit

Tools and Technologies used in this article:
1. Apache Hadoop source code
2. Windows 7 OS
3. Microsoft Windows SDK v7.1
4. Maven
5. Protocol Buffers
6. Cygwin
7. JDK 1.6
8. MongoDB
9. Mongo-Hadoop Connector
10. Mongo Java Driver

Build Hadoop bin distribution for Windows

1. Download and install Microsoft Windows SDK v7.1.
2. Download and install JDK 1.6 (must be the JDK, not the JRE).
3. Download and install the Unix command-line tool Cygwin.
4. Download and install Maven:
5. Unzip the distribution archive, i.e. apache-maven bin.zip, to the directory where you wish to install Maven. These instructions assume you chose C:\Program Files\Apache Software Foundation. The subdirectory apache-maven will be created from the archive.
6. Add the M2_HOME environment variable by opening the system properties (WinKey + Pause), selecting the "Advanced" tab and the "Environment Variables" button, then adding the M2_HOME variable in the user variables with the value C:\Program Files\Apache Software Foundation\apache-maven. Be sure to omit any quotation marks around the path, even if it contains spaces.
7. In the same dialog, add the M2 environment variable in the user variables with the value %M2_HOME%\bin.
8. Optional: In the same dialog, add the MAVEN_OPTS environment variable in the user variables to specify JVM properties, e.g. the value -Xms256m -Xmx512m. This environment variable can be used to supply extra options to Maven.
9. In the same dialog, update/create the Path environment variable in the user variables and prepend the value %M2% to make Maven available on the command line.
10. In the same dialog, make sure that JAVA_HOME exists in your user variables or system variables, that it is set to the location of your JDK, e.g. C:\Program Files\Java\jdk1.5.0_02, and that %JAVA_HOME%\bin is in your Path environment variable.
11. Open a new command prompt (WinKey + R, then type cmd) and run mvn --version to verify that Maven is correctly installed.

5. Download Protocol Buffers and extract them to a folder (say c:\protobuf).
6. Add the environment variables JAVA_HOME, M2_HOME and Platform, if not added already.

Note:
1. The variable name Platform is case sensitive. Its value will be either x64 or Win32, for building on a 64-bit or 32-bit system respectively.
2. If the JDK installation path contains any spaces, use the Windows shortened name (say 'PROGRA~1' for 'Program Files') for the JAVA_HOME environment variable.

Edit the Path variable to add the bin directory of Cygwin (say C:\cygwin64\bin), the bin directory of Maven (say C:\maven\bin) and the installation path of Protocol Buffers (say c:\protobuf).
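Rather than clicking through the Environment Variables dialog, the same user variables can be registered from an ordinary Command Prompt with setx. This is a sketch, not part of the original guide: every install path and the JDK version below are assumptions, so adjust them to your actual locations, and note that setx only takes effect in newly opened console windows.

```bat
:: Sketch only -- all install paths and the JDK version are assumptions.
setx M2_HOME "C:\Program Files\Apache Software Foundation\apache-maven"
setx M2 "C:\Program Files\Apache Software Foundation\apache-maven\bin"
:: Use the 8.3 short name (PROGRA~1) because the JDK path contains a space.
setx JAVA_HOME C:\PROGRA~1\Java\jdk1.6.0_45
:: Platform is case sensitive; use Win32 instead on a 32-bit system.
setx Platform x64
```

The Path variable is still best edited through the dialog, so that you prepend to the existing value instead of overwriting it.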
7. Download hadoop-2.2.0-src.tar.gz and extract it to a folder with a short path (say c:\hdfs) to avoid runtime problems caused by the maximum path length limitation in Windows. To extract a tar file in Windows, open Cygwin, cd to the directory that contains hadoop-2.2.0-src.tar.gz and enter:

tar xvzf hadoop-2.2.0-src.tar.gz -C /cygdrive/c/hdfs

The source should now be extracted to C:\hdfs.

8. A patch needs to be applied to C:\hdfs\hadoop-common-project\hadoop-auth\pom.xml: open the file with any text editor and add the required section after line 57 (shown highlighted in the original white paper).
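The highlighted section itself was a screenshot. A commonly used form of this hadoop-auth patch for the Hadoop 2.2.0 Windows build declares jetty-util as an explicit test dependency; treat the fragment below as an assumption rather than a verbatim copy of the original:

```xml
<!-- Assumed patch for hadoop-auth/pom.xml: declare jetty-util explicitly
     so the test dependencies resolve during the Windows build. -->
<dependency>
  <groupId>org.mortbay.jetty</groupId>
  <artifactId>jetty-util</artifactId>
  <scope>test</scope>
</dependency>
```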
9. Select Start --> All Programs --> Microsoft Windows SDK v7.1 and open the Windows SDK 7.1 Command Prompt. Change directory to the Hadoop source code folder (c:\hdfs) and execute mvn package with the options -Pdist,native-win -DskipTests -Dtar to create the Windows binary tar distribution:

Windows SDK 7.1 Command Prompt
Setting SDK environment relative to C:\Program Files\Microsoft SDKs\Windows\v7.1\.
Targeting Windows 7 x64 Debug

C:\Program Files\Microsoft SDKs\Windows\v7.1>cd c:\hdfs
C:\hdfs>mvn package -Pdist,native-win -DskipTests -Dtar
[INFO] Scanning for projects...
[INFO] Reactor Build Order:
[INFO]
[INFO] Apache Hadoop Main
[INFO] Apache Hadoop Project POM
[INFO] Apache Hadoop Annotations
[INFO] Apache Hadoop Assemblies
[INFO] Apache Hadoop Project Dist POM
[INFO] Apache Hadoop Maven Plugins
[INFO] Apache Hadoop Auth
[INFO] Apache Hadoop Auth Examples
[INFO] Apache Hadoop Common
[INFO] Apache Hadoop NFS
[INFO] Apache Hadoop Common Project

Note: Only the first few lines of the huge log generated by Maven are shown here. This build step requires an Internet connection, as Maven will download all the required dependencies.

10. If everything goes well in the previous step, the native distribution hadoop-2.2.0.tar.gz will be created inside the C:\hdfs\hadoop-dist\target directory.
Install Hadoop
1. Extract hadoop-2.2.0.tar.gz to a folder (say c:\hadoop).
2. Add the environment variable HADOOP_HOME and edit the Path variable to add the bin directory of HADOOP_HOME (say C:\hadoop\bin).

Configure Hadoop
Make the following changes to configure Hadoop.

File: C:\hadoop\etc\hadoop\core-site.xml

<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Licensed under the Apache License, Version 2.0. See the accompanying LICENSE file. -->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

fs.defaultFS: The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The URI's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The URI's authority is used to determine the host, port, etc. for a filesystem.

File: C:\hadoop\etc\hadoop\hdfs-site.xml

<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Licensed under the Apache License, Version 2.0. See the accompanying LICENSE file. -->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/hadoop/data/dfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/hadoop/data/dfs/datanode</value>
  </property>
</configuration>

dfs.replication: Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified at create time.

dfs.namenode.name.dir: Determines where on the local filesystem the DFS name node should store the name table (fsimage). If this is a comma-delimited list of directories, then the name table is replicated in all of the directories, for redundancy.
dfs.datanode.data.dir: Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.

Note: Create the namenode and datanode directories under c:/hadoop/data/dfs/.

File: C:\hadoop\etc\hadoop\yarn-site.xml

<?xml version="1.0"?>
<!-- Licensed under the Apache License, Version 2.0. See the accompanying LICENSE file. -->
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.application.classpath</name>
    <value>
      %HADOOP_HOME%\etc\hadoop,
      %HADOOP_HOME%\share\hadoop\common\*,
      %HADOOP_HOME%\share\hadoop\common\lib\*,
      %HADOOP_HOME%\share\hadoop\mapreduce\*,
      %HADOOP_HOME%\share\hadoop\mapreduce\lib\*,
      %HADOOP_HOME%\share\hadoop\hdfs\*,
      %HADOOP_HOME%\share\hadoop\hdfs\lib\*,
      %HADOOP_HOME%\share\hadoop\yarn\*,
      %HADOOP_HOME%\share\hadoop\yarn\lib\*
    </value>
  </property>
</configuration>

yarn.nodemanager.aux-services: The auxiliary service name. Default value is mapreduce_shuffle.

yarn.nodemanager.aux-services.mapreduce.shuffle.class: The auxiliary service class to use. Default value is org.apache.hadoop.mapred.ShuffleHandler.

yarn.application.classpath: CLASSPATH for YARN applications. A comma-separated list of CLASSPATH entries.

File: C:\hadoop\etc\hadoop\mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Licensed under the Apache License, Version 2.0. See the accompanying LICENSE file. -->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

mapreduce.framework.name: The runtime framework for executing MapReduce jobs. Can be one of local, classic or yarn.

Format namenode
Before you start Hadoop for the first time (and only then), the namenode needs to be formatted.

Command Prompt
Microsoft Windows [Version ]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.

C:\Users\abhijitg>cd c:\hadoop\bin
c:\hadoop\bin>hdfs namenode -format
13/11/03 18:07:47 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = ABHIJITG/x.x.x.x
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.2.0
STARTUP_MSG: classpath = <classpath jars here>
STARTUP_MSG: build = Unknown -r Unknown; compiled by ABHIJITG
STARTUP_MSG: java = 1.7.0_03
************************************************************/
Formatting using clusterid: CID-1af0bd9f-efee-4d4e-9f03-a0032c22e5eb
...
13/11/03 18:07:49 INFO common.Storage: Storage directory \hadoop\data\dfs\namenode has been successfully formatted.
13/11/03 18:07:49 INFO namenode.FSImage: Saving image file \hadoop\data\dfs\namenode\current\fsimage.ckpt using no compression
13/11/03 18:07:49 INFO namenode.FSImage: Image file of size 200 bytes saved in 0 seconds.
13/11/03 18:07:49 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
13/11/03 18:07:49 INFO util.ExitUtil: Exiting with status 0
13/11/03 18:07:49 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ABHIJITG/x.x.x.x
************************************************************/

Note: Only the most relevant lines of the format log are shown here.

Start HDFS (Namenode and Datanode)

Command Prompt
C:\Users\abhijitg>cd c:\hadoop\sbin
c:\hadoop\sbin>start-dfs

Two separate Command Prompt windows will open automatically to run the Namenode and the Datanode.
Start MapReduce aka YARN (Resource Manager and Node Manager)

Command Prompt
C:\Users\abhijitg>cd c:\hadoop\sbin
c:\hadoop\sbin>start-yarn
starting yarn daemons

Similarly, two separate Command Prompt windows will open automatically to run the Resource Manager and the Node Manager.

Verify Installation
If everything went well, you will be able to open the Resource Manager and Node Manager web UI (by default at http://localhost:8088) and the Namenode web UI (by default at http://localhost:50070).

Node Manager:
Namenode:

Install MongoDB
Download MongoDB and unzip the file to any location, preferably something simple without spaces in the path (Cygwin doesn't work well with spaces in file names), such as C:\mongodb. You will also need to create a directory to store the data from MongoDB. Create a folder called data and a folder called db inside of data, so that you have the default directory structure C:\data\db. (If you want to use another location for data you can, but you will need to override the default location when you load data into MongoDB.)
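From a Cygwin terminal, the whole data directory tree can be created in one command. A minimal sketch (shown here under /tmp/demo purely for illustration; on Windows/Cygwin the real target would be /cygdrive/c/data/db, i.e. C:\data\db):

```shell
# Create MongoDB's default data directory tree in a single command.
# On Windows/Cygwin the equivalent would be: mkdir -p /cygdrive/c/data/db
mkdir -p /tmp/demo/data/db
ls -d /tmp/demo/data/db
```

If you pick a different location, start the server with mongod --dbpath <your path> instead of relying on the default.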
Add C:\mongodb\bin to the Path system variable.

To start MongoDB, open a Windows command terminal and type:

mongod

In a second command terminal, type:

mongo
If it is set up properly, it should connect and you will see the prompt change from the Windows prompt of C:\Users\UserName> (or whatever directory you're in) to the MongoDB prompt >.

Install Mongo-Hadoop Connector
Download the Mongo-Hadoop Connector and unzip it into your Cygwin user directory (i.e. C:\cygwin\home\UserName\mongo-hadoop). In a Cygwin terminal, go to this directory by entering the command cd ~/mongo-hadoop. Open the file ~/mongo-hadoop/build.sbt with Notepad++ (or another text editor; if you use a standard Windows text editor, you may have to run unix2dos and dos2unix on the file) and change the line:

hadoopRelease in ThisBuild := "x.x"

to:

hadoopRelease in ThisBuild := "2.2"

Save the file and close it. To compile the connector, enter ./sbt package into the Cygwin terminal while still in the same directory.
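The build.sbt edit can also be scripted from the Cygwin terminal instead of a text editor. A sketch, assuming the file contains a single hadoopRelease line (the sample file below is hypothetical, standing in for the real ~/mongo-hadoop/build.sbt):

```shell
# Create a stand-in build.sbt to demonstrate the edit (hypothetical contents).
mkdir -p /tmp/mongo-hadoop && cd /tmp/mongo-hadoop
printf 'name := "mongo-hadoop"\nhadoopRelease in ThisBuild := "1.1"\n' > build.sbt

# Point the build at Hadoop 2.2 by rewriting the hadoopRelease line in place.
sed -i 's/^hadoopRelease in ThisBuild := ".*"/hadoopRelease in ThisBuild := "2.2"/' build.sbt

grep 'hadoopRelease' build.sbt
```

sed -i edits the file in place; if the file came from a Windows editor, run dos2unix on it first so the line endings do not interfere with the match.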
Note: An Internet connection is required, because the script will also download files.

Note: If you are using a 64-bit version of Windows, Cygwin will give you a warning stating that the compiler is using a Windows path but prefers a Cygwin path; you can ignore this warning, and it will still compile fine. However, if you are using a 32-bit version of Windows, the connector won't build properly without a Cygwin path. To fix this, open ~/mongo-hadoop/sbt and change the value of sbt_jar to the Cygwin location of your sbt-launch.jar (it should be something like /cygdrive/c/home/username/.sbt/launch/0.12.2/sbt-launch.jar) right before the final command (execRunner) is run at line 460.

The build process should generate a JAR file in ~/mongo-hadoop/core/target. You must copy this jar to the lib directory on each node in your Hadoop cluster. This should be located at $HADOOP_HOME/share/hadoop/common/lib, assuming you are using Hadoop v2.2. Download the latest stable version of the Mongo Java Driver and place its jar in this directory as well.

Testing the Mongo-Hadoop Connector
Everything should now be installed correctly, so all that is left to do is test the configuration. To do this we will run the Treasury_Yield example that came with the Mongo-Hadoop connector. If you don't already have MongoDB and Hadoop running, start both of them as described above. Open a Windows command prompt and enter (changing the path to your path to yield_historical_in.json):

mongoimport -d mongo_hadoop -c yield_historical.in c:\cygwin64\home\username\mongo-hadoop\examples\treasury_yield\src\main\resources\yield_historical_in.json

To verify this worked, go to the mongo.exe shell that should be running for MongoDB and type:

use mongo_hadoop
db.yield_historical.in.find().forEach(printjson)

You should see a long list of MongoDB documents that look like the following:

{
    "_id" : ISODate(" T00:00:00Z"),
    "dayOfWeek" : "THURSDAY",
    "bc3Year" : 0.64,
    "bc5Year" : 1.27,
    "bc10Year" : 2.53,
    "bc20Year" : 3.38,
    "bc1Month" : 0.14,
    "bc2Year" : 0.42,
    "bc3Month" : 0.16,
    "bc30Year" : 3.69,
    "bc1Year" : 0.27,
    "bc7Year" : 1.91,
    "bc6Month" : 0.19
}

From your Cygwin home directory, find the file mongo-hadoop/examples/treasury_yield/src/main/resources/mongo-treasury_yield.xml. Open the file and change the output value class for the mapper (lines 91-95) to:

<property>
  <!-- Output value class for the mapper [optional] -->
  <name>mongo.job.mapper.output.value</name>
  <value>org.apache.hadoop.io.DoubleWritable</value>
</property>

Input & output databases: in this file, line 21, <value>mongodb://localhost/mongo_hadoop.yield_historical.in</value>, is your input MongoDB database.collection, and line 26, <value>mongodb://localhost/mongo_hadoop.yield_historical.out</value>, is your output MongoDB database.collection. If the output database or collection doesn't exist yet, it will be created automatically.

Note: In this file you will find many important options that must be set to run a MapReduce job in Hadoop. For this example they are already set for us; however, when you are creating your own MapReduce jobs, you will have to set these:
Class for the Mapper: This must match the class path of the Java Mapper class in the jar file.

Class for the Reducer: This must match the class path of the Java Reducer class in the jar file.

Input Format Class: This should always be set to com.mongodb.hadoop.MongoInputFormat if the input comes from MongoDB.

Output Format Class: This should always be set to com.mongodb.hadoop.MongoOutputFormat if the output needs to be written to MongoDB.

Output Key Class for the Output Format: This class path needs to match the class of the key that is being used.

Output Key Class for the Mapper: This class path needs to match the third type parameter of the Mapper class that is extended. For example, in the TreasuryYieldMapper.java file, the class that it matches is the following:

public class TreasuryYieldMapper
    extends Mapper<Object, BSONObject, IntWritable, DoubleWritable> {

Class path in the xml file: <value>org.apache.hadoop.io.IntWritable</value>

Output Value Class for the Mapper: This class path needs to match the fourth type parameter of the Mapper class that is extended. For example, in the TreasuryYieldMapper.java file, the class that it matches is the following:

public class TreasuryYieldMapper
    extends Mapper<Object, BSONObject, IntWritable, DoubleWritable> {

Class path in the xml file: <value>org.apache.hadoop.io.DoubleWritable</value>

Open the file mongo-hadoop/examples/treasury_yield/src/main/java/com/mongodb/hadoop/examples/treasury/TreasuryYieldXMLConfig.java. Change lines to:
// Configuration.addDefaultResource("src/examples/hadoop-local.xml");
Configuration.addDefaultResource("mongo-defaults.xml");
Configuration.addDefaultResource("mongo-treasury_yield.xml");

Once this is done, to compile this example, cd to ~/mongo-hadoop and run ./sbt treasury-example/package. This should create the jar file ~/mongo-hadoop/examples/treasury_yield/target/treasury-example_ .jar. Copy this file to the %HADOOP_HOME% directory that you set in Windows. Open a command prompt and enter:

cd %HADOOP_HOME%
hadoop jar treasury-example_ .jar com.mongodb.hadoop.examples.treasury.TreasuryYieldXMLConfig

If all has gone well, you should be able to see the output in the mongo.exe shell. From mongo.exe enter the following:

use mongo_hadoop
db.yield_historical.out.find().forEach(printjson)

You should now see the output from the MapReduce job.
More informationMarkLogic Server. MarkLogic Connector for Hadoop Developer s Guide. MarkLogic 8 February, 2015
MarkLogic Connector for Hadoop Developer s Guide 1 MarkLogic 8 February, 2015 Last Revised: 8.0-3, June, 2015 Copyright 2015 MarkLogic Corporation. All rights reserved. Table of Contents Table of Contents
More informationSimba XMLA Provider for Oracle OLAP 2.0. Linux Administration Guide. Simba Technologies Inc. April 23, 2013
Simba XMLA Provider for Oracle OLAP 2.0 April 23, 2013 Simba Technologies Inc. Copyright 2013 Simba Technologies Inc. All Rights Reserved. Information in this document is subject to change without notice.
More informationMatisse Installation Guide for MS Windows
Matisse Installation Guide for MS Windows July 2013 Matisse Installation Guide for MS Windows Copyright 2013 Matisse Software Inc. All Rights Reserved. This manual and the software described in it are
More informationCode Estimation Tools Directions for a Services Engagement
Code Estimation Tools Directions for a Services Engagement Summary Black Duck software provides two tools to calculate size, number, and category of files in a code base. This information is necessary
More informationRecoveryVault Express Client User Manual
For Linux distributions Software version 4.1.7 Version 2.0 Disclaimer This document is compiled with the greatest possible care. However, errors might have been introduced caused by human mistakes or by
More informationServletExec TM 6.0 Installation Guide. for Microsoft Internet Information Server SunONE Web Server Sun Java System Web Server and Apache HTTP Server
ServletExec TM 6.0 Installation Guide for Microsoft Internet Information Server SunONE Web Server Sun Java System Web Server and Apache HTTP Server ServletExec TM NEW ATLANTA COMMUNICATIONS, LLC 6.0 Installation
More informationJava Language Tools COPYRIGHTED MATERIAL. Part 1. In this part...
Part 1 Java Language Tools This beginning, ground-level part presents reference information for setting up the Java development environment and for compiling and running Java programs. This includes downloading
More informationHow to Install and Configure EBF15328 for MapR 4.0.1 or 4.0.2 with MapReduce v1
How to Install and Configure EBF15328 for MapR 4.0.1 or 4.0.2 with MapReduce v1 1993-2015 Informatica Corporation. No part of this document may be reproduced or transmitted in any form, by any means (electronic,
More information1. Product Information
ORIXCLOUD BACKUP CLIENT USER MANUAL LINUX 1. Product Information Product: Orixcloud Backup Client for Linux Version: 4.1.7 1.1 System Requirements Linux (RedHat, SuSE, Debian and Debian based systems such
More informationPlug-In for Informatica Guide
HP Vertica Analytic Database Software Version: 7.0.x Document Release Date: 2/20/2015 Legal Notices Warranty The only warranties for HP products and services are set forth in the express warranty statements
More informationOnline Backup Client User Manual
For Linux distributions Software version 4.1.7 Version 2.0 Disclaimer This document is compiled with the greatest possible care. However, errors might have been introduced caused by human mistakes or by
More informationOnline Backup Client User Manual Linux
Online Backup Client User Manual Linux 1. Product Information Product: Online Backup Client for Linux Version: 4.1.7 1.1 System Requirements Operating System Linux (RedHat, SuSE, Debian and Debian based
More informationInstalling the Android SDK
Installing the Android SDK To get started with development, we first need to set up and configure our PCs for working with Java, and the Android SDK. We ll be installing and configuring four packages today
More informationTCH Forecaster Installation Instructions
RESOURCE AND PATIENT MANAGEMENT SYSTEM TCH Forecaster Installation Instructions (BI) Addendum to Installation Guide and Release Notes Version 8.5 patch 8 Office of Information Technology Division of Information
More informationEVALUATION ONLY. WA2088 WebSphere Application Server 8.5 Administration on Windows. Student Labs. Web Age Solutions Inc.
WA2088 WebSphere Application Server 8.5 Administration on Windows Student Labs Web Age Solutions Inc. Copyright 2013 Web Age Solutions Inc. 1 Table of Contents Directory Paths Used in Labs...3 Lab Notes...4
More informationThis handout describes how to start Hadoop in distributed mode, not the pseudo distributed mode which Hadoop comes preconfigured in as on download.
AWS Starting Hadoop in Distributed Mode This handout describes how to start Hadoop in distributed mode, not the pseudo distributed mode which Hadoop comes preconfigured in as on download. 1) Start up 3
More informationHadoop Installation. Sandeep Prasad
Hadoop Installation Sandeep Prasad 1 Introduction Hadoop is a system to manage large quantity of data. For this report hadoop- 1.0.3 (Released, May 2012) is used and tested on Ubuntu-12.04. The system
More informationEMC Documentum Connector for Microsoft SharePoint
EMC Documentum Connector for Microsoft SharePoint Version 7.1 Installation Guide EMC Corporation Corporate Headquarters Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Legal Notice Copyright 2013-2014
More informationAWS Schema Conversion Tool. User Guide Version 1.0
AWS Schema Conversion Tool User Guide AWS Schema Conversion Tool: User Guide Copyright 2016 Amazon Web Services, Inc. and/or its affiliates. All rights reserved. Amazon's trademarks and trade dress may
More informationOnline Backup Client User Manual
Online Backup Client User Manual Software version 3.21 For Linux distributions January 2011 Version 2.0 Disclaimer This document is compiled with the greatest possible care. However, errors might have
More informationWitango Application Server 6. Installation Guide for OS X
Witango Application Server 6 Installation Guide for OS X January 2011 Tronics Software LLC 503 Mountain Ave. Gillette, NJ 07933 USA Telephone: (570) 647 4370 Email: support@witango.com Web: www.witango.com
More informationInstalling Java. Table of contents
Table of contents 1 Jargon...3 2 Introduction...4 3 How to install the JDK...4 3.1 Microsoft Windows 95... 4 3.1.1 Installing the JDK... 4 3.1.2 Setting the Path Variable...5 3.2 Microsoft Windows 98...
More information3. Installation and Configuration. 3.1 Java Development Kit (JDK)
3. Installation and Configuration 3.1 Java Development Kit (JDK) The Java Development Kit (JDK) which includes the Java Run-time Environment (JRE) is necessary in order for Apache Tomcat to operate properly
More informationIUCLID 5 Guidance and support. Installation Guide Distributed Version. Linux - Apache Tomcat - PostgreSQL
IUCLID 5 Guidance and support Installation Guide Distributed Version Linux - Apache Tomcat - PostgreSQL June 2009 Legal Notice Neither the European Chemicals Agency nor any person acting on behalf of the
More informationConfiguring multiple Tomcat instances with a single Apache Load Balancer
Configuring multiple Tomcat instances with a single Apache Load Balancer How to set up Tomcat and Apache for load balancing HP Software Service Management Introduction... 2 Prerequisites... 2 Configuring
More informationJava Software Development Kit (JDK 5.0 Update 14) Installation Step by Step Instructions
Java Software Development Kit (JDK 5.0 Update 14) Installation Step by Step Instructions 1. Click the download link Download the Java Software Development Kit (JDK 5.0 Update 14) from Sun Microsystems
More informationH2O on Hadoop. September 30, 2014. www.0xdata.com
H2O on Hadoop September 30, 2014 www.0xdata.com H2O on Hadoop Introduction H2O is the open source math & machine learning engine for big data that brings distribution and parallelism to powerful algorithms
More informationHow To Use Hadoop
Hadoop in Action Justin Quan March 15, 2011 Poll What s to come Overview of Hadoop for the uninitiated How does Hadoop work? How do I use Hadoop? How do I get started? Final Thoughts Key Take Aways Hadoop
More informationHadoop Basics with InfoSphere BigInsights
An IBM Proof of Technology Hadoop Basics with InfoSphere BigInsights Part: 1 Exploring Hadoop Distributed File System An IBM Proof of Technology Catalog Number Copyright IBM Corporation, 2013 US Government
More informationJobScheduler Installation by Copying
JobScheduler - Job Execution and Scheduling System JobScheduler Installation by Copying Deployment of multiple JobSchedulers on distributed servers by copying a template JobScheduler March 2015 March 2015
More informationSetup Guide for HDP Developer: Storm. Revision 1 Hortonworks University
Setup Guide for HDP Developer: Storm Revision 1 Hortonworks University Overview The Hortonworks Training Course that you are attending is taught using a virtual machine (VM) for the lab environment. Before
More informationCompiere ERP & CRM Installation Instructions Linux System - EnterpriseDB
Compiere ERP & CRM Installation Instructions Linux System - EnterpriseDB Compiere Learning Services Division Copyright 2007 Compiere, inc. All rights reserved www.compiere.com Table of Contents Compiere
More informationApplication Notes for Packaging and Deploying Avaya Communications Process Manager Sample SDK Web Application on a JBoss Application Server Issue 1.
Avaya Solution & Interoperability Test Lab Application Notes for Packaging and Deploying Avaya Communications Process Manager Sample SDK Web Application on a JBoss Application Server Issue 1.0 Abstract
More informationCompiere ERP & CRM Installation Instructions Windows System - EnterpriseDB
Compiere ERP & CRM Installation Instructions Windows System - EnterpriseDB Compiere Learning Services Division Copyright 2007 Compiere, inc. All rights reserved www.compiere.com Table of Contents Compiere
More informationHadoop MapReduce Tutorial - Reduce Comp variability in Data Stamps
Distributed Recommenders Fall 2010 Distributed Recommenders Distributed Approaches are needed when: Dataset does not fit into memory Need for processing exceeds what can be provided with a sequential algorithm
More informationHadoop 2.6.0 Setup Walkthrough
Hadoop 2.6.0 Setup Walkthrough This document provides information about working with Hadoop 2.6.0. 1 Setting Up Configuration Files... 2 2 Setting Up The Environment... 2 3 Additional Notes... 3 4 Selecting
More informationSingle Node Setup. Table of contents
Table of contents 1 Purpose... 2 2 Prerequisites...2 2.1 Supported Platforms...2 2.2 Required Software... 2 2.3 Installing Software...2 3 Download...2 4 Prepare to Start the Hadoop Cluster... 3 5 Standalone
More informationThird-Party Software Support. Converting from SAS Table Server to a SQL Server Database
Third-Party Software Support Converting from SAS Table Server to a SQL Server Database Table of Contents Prerequisite Steps... 1 Database Migration Instructions for the WebSphere Application Server...
More informationHDFS Installation and Shell
2012 coreservlets.com and Dima May HDFS Installation and Shell Originals of slides and source code for examples: http://www.coreservlets.com/hadoop-tutorial/ Also see the customized Hadoop training courses
More informationPMOD Installation on Windows Systems
User's Guide PMOD Installation on Windows Systems Version 3.7 PMOD Technologies Windows Installation The installation for all types of PMOD systems starts with the software extraction from the installation
More informationReflection DBR USER GUIDE. Reflection DBR User Guide. 995 Old Eagle School Road Suite 315 Wayne, PA 19087 USA 610.964.8000 www.evolveip.
Reflection DBR USER GUIDE 995 Old Eagle School Road Suite 315 Wayne, PA 19087 USA 610.964.8000 www.evolveip.net Page 1 of 1 Table of Contents Overview 3 Reflection DBR Client and Console Installation 4
More informationHadoop Distributed Filesystem. Spring 2015, X. Zhang Fordham Univ.
Hadoop Distributed Filesystem Spring 2015, X. Zhang Fordham Univ. MapReduce Programming Model Split Shuffle Input: a set of [key,value] pairs intermediate [key,value] pairs [k1,v11,v12, ] [k2,v21,v22,
More informationPractice Fusion API Client Installation Guide for Windows
Practice Fusion API Client Installation Guide for Windows Quickly and easily connect your Results Information System with Practice Fusion s Electronic Health Record (EHR) System Table of Contents Introduction
More informationUSING HDFS ON DISCOVERY CLUSTER TWO EXAMPLES - test1 and test2
USING HDFS ON DISCOVERY CLUSTER TWO EXAMPLES - test1 and test2 (Using HDFS on Discovery Cluster for Discovery Cluster Users email n.roy@neu.edu if you have questions or need more clarifications. Nilay
More informationImportant Notice. (c) 2010-2013 Cloudera, Inc. All rights reserved.
Hue 2 User Guide Important Notice (c) 2010-2013 Cloudera, Inc. All rights reserved. Cloudera, the Cloudera logo, Cloudera Impala, and any other product or service names or slogans contained in this document
More informationWitango Application Server 6. Installation Guide for Windows
Witango Application Server 6 Installation Guide for Windows December 2010 Tronics Software LLC 503 Mountain Ave. Gillette, NJ 07933 USA Telephone: (570) 647 4370 Email: support@witango.com Web: www.witango.com
More informationBuilding graphic-rich and better performing native applications. Pro. Android C++ with the NDK. Onur Cinar
Building graphic-rich and better performing native applications Pro Android C++ with the NDK Onur Cinar For your convenience Apress has placed some of the front matter material after the index. Please
More informationSymantec Enterprise Solution for Hadoop Installation and Administrator's Guide 1.0
Symantec Enterprise Solution for Hadoop Installation and Administrator's Guide 1.0 The software described in this book is furnished under a license agreement and may be used only in accordance with the
More informationSIMIAN systems. Setting up a Sitellite development environment on Windows. Sitellite Content Management System
Setting up a Sitellite development environment on Windows Sitellite Content Management System Introduction For live deployment, it is strongly recommended that Sitellite be installed on a Unix-based operating
More informationSetting Up a Windows Virtual Machine for SANS FOR526
Setting Up a Windows Virtual Machine for SANS FOR526 As part of the Windows Memory Forensics course, SANS FOR526, you will need to create a Windows virtual machine to use in class. We recommend using VMware
More informationGetting to know Apache Hadoop
Getting to know Apache Hadoop Oana Denisa Balalau Télécom ParisTech October 13, 2015 1 / 32 Table of Contents 1 Apache Hadoop 2 The Hadoop Distributed File System(HDFS) 3 Application management in the
More informationVirtual Machine (VM) For Hadoop Training
2012 coreservlets.com and Dima May Virtual Machine (VM) For Hadoop Training Originals of slides and source code for examples: http://www.coreservlets.com/hadoop-tutorial/ Also see the customized Hadoop
More informationMigrating MSDE to Microsoft SQL 2008 R2 Express
How To Updated: 11/11/2011 2011 Shelby Systems, Inc. All Rights Reserved Other brand and product names are trademarks or registered trademarks of the respective holders. If you are still on MSDE 2000,
More informationEMC Clinical Archiving
EMC Clinical Archiving Version 1.7 Installation Guide EMC Corporation Corporate Headquarters Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Legal Notice Copyright 2014-2015 EMC Corporation. All Rights
More informationPN 00651. Connect:Enterprise Secure FTP Client Release Notes Version 1.2.00
PN 00651 Connect:Enterprise Secure FTP Client Release Notes Version 1.2.00 Connect:Enterprise Secure FTP Client Release Notes Version 1.2.00 First Edition This documentation was prepared to assist licensed
More informationVoyager Reporting System (VRS) Installation Guide. Revised 5/09/06
Voyager Reporting System (VRS) Installation Guide Revised 5/09/06 System Requirements Verification 1. Verify that the workstation s Operating System is Windows 2000 or Higher. 2. Verify that Microsoft
More informationAhsay Offsite Backup Server and Ahsay Replication Server
Ahsay Offsite Backup Server and Ahsay Replication Server v6 Ahsay Systems Corporation Limited 19 April 2013 Ahsay Offsite Backup Server and Ahsay Replication Server Copyright Notice 2013 Ahsay Systems
More informationIntel Integrated Native Developer Experience (INDE): IDE Integration for Android*
Intel Integrated Native Developer Experience (INDE): IDE Integration for Android* 1.5.8 Overview IDE Integration for Android provides productivity-oriented design, coding, and debugging tools for applications
More informationMapReduce. Tushar B. Kute, http://tusharkute.com
MapReduce Tushar B. Kute, http://tusharkute.com What is MapReduce? MapReduce is a framework using which we can write applications to process huge amounts of data, in parallel, on large clusters of commodity
More information