HiPerDNO. High Performance Computing Technologies for Smart Distribution Network Operation
HiPerDNO: High Performance Computing Technologies for Smart Distribution Network Operation

Project coordinator: Dr Gareth Taylor (BU)
Consortium members: Brunel University (BU), Electricite de France (EF), IBM Israel, University of Oxford, UK Power Networks plc (UKPN), Union Fenosa Distribution (Union Fenosa), Indra Sistemas, GTD Systems de Information (GTD), Korona Inzeniting (Korona), Elektro Gorenjska Podjetje za Distribucijo Elektricne Energije (EG) and Fraunhofer IWES

Document title: Prototype deployment of selected architecture and necessary documentation, D1.2.2
Document identifier: HiPerDNO/2011/D
Version: 1.0
Work package number: WP1
Sub-work package number: WP1.2
Distribution: Public
Reporting consortium member: Oxford
Internal reviewer & review date: David Wallom (July 2011)
Prototype deployment of selected architecture and necessary documentation, D1.2.2

Executive Summary

This document presents the HiPerDNO HPC Platform prototype developed in WP1.2. The prototype covers all major components of the HiPerDNO HPC platform, and it was presented during the HiPerDNO Quarterly Progress Meeting in Clamart on 29-30 June 2011. The prototype can be downloaded from the FTP server hosted by Brunel University, and can be installed and run following the instructions and tutorial in this document. This document also lists the requirements for the HPC scheduler. These requirements are met by the HPC prototype, and represent our current understanding, formed through interaction with the HiPerDNO partners, of the needs of DNO applications. As collaborations with partners progress, so will the specification and deployment of their needs within the HiPerDNO HPC Platform.
Document Information

Project acronym: HiPerDNO
Deliverable: D1.2.2, Prototype deployment of selected architecture and necessary documentation (M18)
Work package: WP1.2, Investigation and evaluation of new HPC platforms and architectures
Status: Version 1.0
Nature: Prototype
Dissemination level: Public
Authors (partner): Stef Salvini, David Wallom (Oxford)
Lead author: Piotr Lopatka (Oxford)

Abstract: This document and the prototype of the HPC System constitute deliverable D1.2.2. The prototype covers all major components of the HiPerDNO HPC platform, and it was presented during the HiPerDNO Quarterly Progress Meeting in Clamart on 29-30 June 2011. The prototype can be downloaded from the FTP server hosted by Brunel University; access details can be found on Basecamp. This document describes the prototype, its installation and use.

Keywords: prototype, virtualization, HPC, Torque, MAUI, HDFS, NFS
Table of Contents

1. Introduction
2. Prototype Description
   Purpose of the Prototype
   Functional Description of HPC Prototype
   HPC Interface to the Data System
   Applications scheduling
3. Prototype Deployment Installation Tutorial
   Prerequisites
   Creation of VMs
   Virtual network setup
   Head-Node: hpc1
   Compute-Nodes: hpc1, hpc2, hpc3, hpc4
   Fast Storage node: storage
   Data System nodes: hdfs1, hdfs2
4. Testing the Prototype
   DNO-like model applications
   Running the example
   Other key commands (HDFS, Torque, MAUI)
   Editing schedules
5. From Prototype to System Deployment
6. References
1. Introduction

This document describes the prototype of the HiPerDNO HPC Platform that was described and demonstrated at the HiPerDNO Quarterly Progress Meeting in Clamart on 29-30 June 2011. The prototype implements the design concepts introduced in [1]. The chapters of this document discuss the prototype implementation, deployment and the tests that were conducted. The HPC prototype software package has been made available to all partners through the FTP server hosted by Brunel University. Installation instructions are included in this deliverable.
2. Prototype Description

Essential reading: HiPerDNO deliverables [1] (particularly chapters 2 and 3) and [7].

The prototype consists of the HPC Engine (HPCE) and the Data Subsystem (DSS). The HPCE schedules and executes the DNO applications, making optimal use of resources while ensuring an appropriate balance and priority between mission-critical and non-critical applications. The Data System is a resilient and secure repository for the data necessary to execute DNO applications. A fast storage system is an essential component of the HPCE: it is a local storage system for compute nodes, where applications as well as their current operational data (scratch data) are stored.

Purpose of the Prototype

The purpose of this prototype is two-fold. First, it allowed both feasibility and implementation studies of the system architecture presented in [1], as it covers all major components of the HiPerDNO HPC Platform as defined in that document. Second, the prototype allows the testing of the DNO application scenarios that will run on the full HPC system. Test applications with appropriate scheduling requirements and characteristics were employed during the demonstration (see chapter 4). It also allows the integration and test deployment of real DNO applications, which is the subject of ongoing work.

Functional Description of HPC Prototype

The HPC prototype consists of the HPCE and the DSS, and is depicted in Figure 1. One of the nodes in the HPCE is the head-node. In the prototype it also plays the role of the DMS, responsible for requesting execution of jobs in the HPC, and it includes the scheduler and its required mechanisms. The HPC nodes require the presence of a common storage space (Fast Storage in Figure 1).

Figure 1. The HPC Prototype.
HPC Interface to the Data System

All data are stored in the DSS. The DSS is not visible to the applications, nor indeed to either the DMS or the HPCE: data can only be accessed through appropriate client-server mechanisms. Both retrieval and storage of data are carried out through the client-server interface. In the prototype, data retrieval and storage for the model applications are greatly simplified. We expect that in the HiPerDNO HPC Platform appropriate data design, including metadata, will be carried out, but this is application-dependent and details will emerge in due course through the collaborations already started. The prototype relies on a meta-data protocol that allows identification of the type of application and of data access. Exchange of meta-data precedes exchange of content. The client-server approach also allows the DSS technology (currently HDFS) to be changed without affecting the rest of the system, and it can be security-hardened.

Applications scheduling

The scheduler ensures that applications are executed according to the execution priorities set by the DNOs. Based on collaboration with partners, we (WP1.2) have laid out the preliminary scheduler requirements for HPC applications; these are further discussed in chapter 4 below and in [7]. We would like to stress that these requirements are not final: they correspond to our current understanding, and will no doubt be modified following the DNOs' feedback and the emergence of fully defined requirements as the project progresses. The high-level requirements that must be supported by the scheduler are pre-emption and partitioning. Pre-emption allows the HPC system to stop a running task in order to make resources available for a high-priority task (kill-and-requeue). Low-priority jobs are killed and re-added to the appropriate scheduler queue without altering their job id, hence keeping their place in that queue. Pre-empted tasks can then be restarted as soon as HPC Engine resources become available.
Partitioning allows limiting the amount of resources that can be dedicated to jobs of a specific priority. The scheduler requirements are listed in greater detail in table 1. We have employed the Maui scheduler in the prototype, sitting on top of the Torque resource manager. Maui is a freely available, open-source package, and it supports all the requirements in table 1, with the exception of the optional requirement no. 6. Maui does not allow medium-priority jobs to be both pre-emptable by higher-priority jobs and at the same time able to pre-empt lower-priority ones: the roles of pre-emptee and pre-emptor, so to speak, are incompatible, although this might be studied further. In any case, we do not believe that this has any major impact at this stage of the HiPerDNO project: the suggested partitioning of computing resources is adequate for current needs. Note that we had to create a patch to ensure that Maui satisfies requirement no. 5 in table 1. This patch is of course part of this deliverable and is made available together with the other sources, scripts and model applications.
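The kill-and-requeue and partitioning behaviour described above maps onto a small number of Maui directives. The fragment below is only an illustrative sketch with hypothetical class names; the maui.cfg shipped with this deliverable is the authoritative configuration:

```
# Pre-emption by kill-and-requeue: pre-empted jobs return to their queue
PREEMPTPOLICY         REQUEUE

# Hypothetical job classes: critical jobs may pre-empt,
# non-critical jobs may be pre-empted
CLASSCFG[critical]    PRIORITY=1000 QFLAGS=PREEMPTOR
CLASSCFG[noncritical] PRIORITY=1    QFLAGS=PREEMPTEE

# Partitioning (requirement no. 7): cap the resources one class may occupy
CLASSCFG[cm]          PRIORITY=100  MAXPROC=2
```

Any class names used here would have to match the submission queues created in Torque via torque.cfg.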
No. / Type / Priority / Requirement

1. Resources (Must): The scheduler supports execution of jobs requiring 1...n compute nodes.
2. Pre-emption (Must): The scheduler supports pre-emption.
3. Pre-emption (Must): Jobs can be classified as critical and non-critical (critical jobs always pre-empt non-critical jobs).
4. Pre-emption (Must): A requeueing mechanism is supported, i.e. killing jobs and putting them at the top of their queue for execution.
5. Pre-emption (Must): A critical job can pre-empt as many low-priority jobs as necessary (but no more) to get the required resources for execution.
6. Pre-emption (Nice to have): Another class of jobs can be defined with an intermediate priority level between critical and non-critical. These jobs can pre-empt non-critical jobs, but can themselves be pre-empted by critical jobs (pre-emption cascade).
7. Partitioning (Must): If no. 6 is not possible, the scheduler must allow the partitioning of resources: the amount of compute resources allocated to a certain class of critical jobs can be limited.
8. Scheduling latency (Must): To satisfy mission-critical requirements, high-priority jobs must run when they are submitted. Pre-emption and scheduling frequency must be sufficiently fine-grained to allow the stopping of low-priority jobs and starting of high-priority jobs quickly and effectively. (Order of 1 second?)

Table 1. HiPerDNO scheduler requirements.
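The metadata-before-content exchange used by the client-server data interface (chapter 2) can be pictured with a short sketch. Everything below — the field names, the JSON encoding and the length-prefix framing — is our assumption for illustration only; the prototype's ds.py client-server scripts define the actual protocol.

```python
import json
import socket

def build_header(app: str, operation: str, name: str, length: int) -> bytes:
    """Length-prefixed JSON metadata block (field names are assumptions)."""
    meta = {"application": app,      # e.g. "DSE", "CM" or "DM"
            "operation": operation,  # e.g. "store" or "retrieve"
            "name": name,
            "length": length}
    body = json.dumps(meta).encode()
    return len(body).to_bytes(4, "big") + body

def store(host: str, port: int, app: str, name: str, content: bytes) -> None:
    """Metadata exchange precedes content exchange, as described in chapter 2."""
    with socket.create_connection((host, port)) as s:
        s.sendall(build_header(app, "store", name, len(content)))
        s.sendall(content)
```

Because the client sees only this interface, the DSS backend (currently HDFS) can be swapped without touching the applications.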
3. Prototype Deployment Installation Tutorial

The following sections discuss the installation of the HPC prototype. The prototype is to be deployed in a virtualized environment. We would like to stress that the target deployment infrastructure for the HPC system will be real hardware; this is, however, part of ongoing work and not part of this tutorial (see also chapter 5).

A number of prerequisites must be met before starting the HPC prototype installation. The deployment consists of two stages. The first stage involves the creation of a virtual cluster (nodes and network), the installation of the operating system and the network configuration. The second stage involves the installation and configuration of the software central to the prototype: resource manager, scheduler, fast storage and data system software.

Prerequisites

The following list presents all necessary components for the deployment of the prototype. We suggest using VirtualBox as the virtualization software and Ubuntu Server as the operating system for the nodes.

Host
- Hardware: desktop PC (we used an Intel Core 2 Quad with 8 GB RAM)
- OS: Linux (we used Ubuntu 10.04)

Virtual Machines (VMs)
- Virtualisation software: Oracle VirtualBox
- Guest OS: Ubuntu Server (or Ubuntu 10.04)

Prototype software (archive D1.2.2.tar.gz from Basecamp), including:
- Cluster Resources MAUI (patched version)
- Cluster Resources Torque
- MAUI configuration file: maui.cfg
- Torque configuration file: torque.cfg
- demo scripts and applications
- a video of the running prototype
- HDFS (Hadoop Yahoo distribution 20.2, hadoop.apache.org)

Creation of VMs

Seven virtual machines should be created. The names listed below are fixed, as they are part of the prototype configuration: hpc1, hpc2, hpc3, hpc4, storage, hdfs1, hdfs2. The OS on the VMs does not require a GUI, so the VMs have modest requirements: one CPU core and 384 MB of RAM per virtual machine are sufficient.
VirtualBox provides a graphical interface which makes it easy to create VMs quickly. During creation, most parameters can be left unchanged, but the network interfaces must be set correctly. We chose to use two network interfaces per virtual machine. The first gives access to the Internet (security updates, software repositories). The second connects the VMs together using a virtual network and makes them accessible from the host. In terms of VirtualBox configuration, two network interfaces should be enabled during the VM generation: the first will be a NAT (Network Address Translation) interface for Internet access, the second a Host-Only Adapter for the virtual network connection with the other VMs and the host.

The OS must then be installed. The installation of Ubuntu Server is very straightforward and takes only a few minutes. During installation, all the default options can be accepted. When asked, create the default username proto, and mark the OpenSSH package for installation. It is possible to speed up the whole process by cloning the virtual hard drive of the first VM and plugging it into the other six; it is important to remember to change their hostnames. For more information on cloning, please refer to the VirtualBox documentation [5, chapter 5].

Virtual network setup

This step assumes that all the VMs have been created, that the OS has been installed in each of them, and that the user proto is enabled and has rights to execute administrator's tasks (sudo). The network configuration requires editing two configuration files. The IP addresses are static for the virtual cluster network, and dynamic for access to the Internet. The static addresses were chosen arbitrarily and can be changed if necessary. The following section can be appended to each VM's /etc/hosts file (the addresses below are examples; substitute your own if necessary):

# Prototype hostnames mapping
192.168.56.101 hpc1
192.168.56.102 hpc2
192.168.56.103 hpc3
192.168.56.104 hpc4
192.168.56.105 hdfs1
192.168.56.106 hdfs2
192.168.56.107 storage

The VirtualBox settings of each VM define the first network interface as NAT and the second as Host-Only Adapter; the Linux kernel in the guest OS should assign them the eth0 and eth1 descriptors, respectively. The following section can be added to the /etc/network/interfaces file of every VM operating system if required.
It may be necessary to substitute the IP address field for each VM to avoid any address conflicts.

# The primary (NAT) network interface
auto eth0
iface eth0 inet dhcp

# The secondary (virtual network) interface
# (example values: use this VM's address from /etc/hosts)
auto eth1
iface eth1 inet static
    address 192.168.56.101
    netmask 255.255.255.0
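Creating seven identical VMs by hand is repetitive; the GUI steps above can also be scripted. The following is a hedged sketch (prototype VM names and 384 MB memory as above; the exact VBoxManage network flags may need adjusting for your host), written as a dry run that prints the commands:

```shell
# Print the VBoxManage calls that would create the seven prototype VMs,
# each with one NAT NIC and one host-only NIC (pipe the output to sh to run).
for vm in hpc1 hpc2 hpc3 hpc4 storage hdfs1 hdfs2; do
  echo "VBoxManage createvm --name $vm --ostype Ubuntu --register"
  echo "VBoxManage modifyvm $vm --memory 384 --cpus 1 --nic1 nat --nic2 hostonly"
done
```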
After the VMs are rebooted, the network should be up and running. To verify both interfaces, try to ping a machine on the Internet and the host's virtual network interface. It is suggested to enable SSH access on all nodes for the user proto, so that the nodes can be accessed from the host. By default the SSH service is enabled if the OpenSSH software was marked for installation during OS installation, and we suggest enabling password-less SSH access from the host to all nodes; instructions can be found in the last section of this chapter, which discusses the installation of the Data System software. This completes the first stage of the installation: the virtual cluster can be started, and it is possible to connect to any VM from the host machine using SSH. In the next step, we are going to install the required software packages on the virtual nodes.

Head-Node: hpc1

The hpc1 node is the head-node. It requires the following software packages to be installed:
- MAUI scheduler
- Torque server module

Please use the MAUI installation package provided with this deliverable and not the one available online: we have applied a required patch to the package we provide.

MAUI

Copy, unpack and install the file maui-{version}.tar.gz, using the following commands:
>tar xzvf maui-*.tar.gz
>cd maui-*
>./configure
>make
>make install

Copy the file maui.cfg from the installation directory to the following location on the hpc1 node:
maui.cfg -> (hpc1) /var/spool/maui/maui.cfg

The MAUI installation and troubleshooting documentation can be found in [2].

Torque

Copy, unpack and install the file torque-{version}.tar.gz, using the following commands:
>tar xzvf torque-*.tar.gz
>cd torque-*
>./configure
>make
>sudo make install

The installation of the Torque server module creates the /var/spool/torque directory, where configuration files are placed.
From the torque files folder of the installation package, copy the following file:
nodes -> (hpc1) /var/spool/torque/server_priv/nodes

Set up the Torque submission queues by copying and installing the Torque configuration file:
torque.cfg -> (hpc1) ~/torque.cfg
>sudo qmgr < torque.cfg

The Torque installation and troubleshooting documentation can be found in [3].

Compute-Nodes: hpc1, hpc2, hpc3, hpc4

Torque

The nodes hpc1, ..., hpc4 are compute nodes. Notice that the hpc1 node is also the head-node. The compute nodes require the following software installed:
- Torque compute node module
- NFS client

The Torque module allows compute nodes to communicate with the head-node. To install the Torque compute node module, log in to the hpc1 node, enter the Torque installation directory and create the compute module installation packages:
>cd torque-*
>make packages

This generates a set of Torque packages. Copy and install the following packages on all four compute nodes:
>sudo ./torque-package-mom-linux-i686.sh --install
>sudo ./torque-package-clients-linux-i686.sh --install

To enable Torque to run as a service, execute the following commands:
>sudo cp contrib/init.d/debian.pbs_mom /etc/init.d/pbs_mom
>sudo update-rc.d pbs_mom defaults

From the torque files folder of the installation package, copy the following files:
server_name -> (hpc1, ..., hpc4) /var/spool/torque/server_name
config -> (hpc1, ..., hpc4) /var/spool/torque/mom_priv/config

NFS Client

To install the NFS client on an Ubuntu virtual machine, follow the tutorial in [6]. On each compute node hpc1, ..., hpc4, create the folder ~/shared/demo for the user proto. This folder should be mapped to the exported folder on the Fast Storage system (see the next section).

Fast Storage node: storage

In the prototype, the dedicated node storage is used as the Fast Storage node, and it requires:
- NFS server

Log in to the storage node and install the NFS server following the tutorial in [6].
Create the shared folder:
>mkdir -p ~/storage/demo

Export the folder by adding the following bind-mount entry to the /etc/fstab file:
/home/proto/storage /export/storage none bind 0 0

and export /export/storage via NFS as described in [6]. Verify that the exported folder can be accessed from all compute nodes.

Data System nodes: hdfs1, hdfs2

The Data System requires the installation of:
- HDFS (Hadoop Distributed File System)

The HDFS package can be obtained from the Cloudera website. HDFS has a few prerequisites: it requires the JDK6 and SSH packages. The details of installing JDK6 are not addressed here, as they are covered by the Cloudera documentation. The SSH setup requires password-less access between the two machines, which takes two steps: generating a key for the user, and copying this key to the remote machine for subsequent password-less access. To generate an encryption key, execute the following command on both machines, hdfs1 and hdfs2:
>ssh-keygen -t rsa

When asked, confirm the default parameters. To enable password-less SSH access between the hdfs1 and hdfs2 machines, run the following commands on the nodes:

On the hdfs1 node:
>ssh-copy-id -i ~/.ssh/id_rsa.pub proto@hdfs2

On the hdfs2 node:
>ssh-copy-id -i ~/.ssh/id_rsa.pub proto@hdfs1

Before the HDFS cluster can be used, it must be formatted. A list of useful HDFS commands, including the command to format the HDFS file system, is presented in the next chapter.
4. Testing the Prototype

DNO-like model applications

The prototype contains three model DNO applications, as studied in the HiPerDNO project: distributed state estimation (DSE), condition monitoring (CM) and data mining (DM). These applications differ with respect to criticality, execution pattern (periodic, aperiodic) and required compute resources. The characteristics summarized in table 2 were chosen as an example scenario; at the same time, they are based on the input provided by the partners in the project.

DNO Application / Scheduling characteristics
DSE: mission-critical, periodic, short submission interval, parallel
CM: high-priority, periodic, long submission interval, batch of serial tasks
DM: non-critical w.r.t. timely results, aperiodic, long execution time

Table 2. Model DNO applications, execution requirements.

The DSE is a periodic job with a short submission interval (a few minutes; shortened significantly for the purpose of this prototype). It is a parallel task requiring a number of compute nodes, and it is mission-critical, thus requiring immediate execution at the moment of submission. The CM is a periodic task with long submission intervals. A CM submission consists of a batch of small serial jobs submitted together at regular intervals. CM jobs are high priority and are executed on a limited number of compute nodes. The DM jobs are not periodic; they are serial tasks, submitted at arbitrary intervals, requiring long execution times. They are not critical jobs from the DNO point of view and can be executed when the system load due to higher-priority jobs allows; thus their execution can be postponed. The model DM application includes checkpointing/warm restarting, allowing DM jobs to be pre-empted and rescheduled (kill-and-requeue).

Running the example

In the previous chapter, the HPC hardware infrastructure and software components (Torque, MAUI, NFS, HDFS) were set up.
Before the prototype can be run, the DNO model applications and submission scripts must be deployed from the host, and a basic data layout must be created in the DSS. Start the prototype by executing the following script on the host:
>./startvms

When all virtual machines are running, deploy all scripts and DNO model applications from the host by running:
>./deploy

The prototype can later be shut down by running the following command from the host:
>./stopvms
Open a terminal window and log into the hdfs1 node. Create the data folders for the DNO applications:
>hadoop dfs -mkdir DSEIn
>hadoop dfs -mkdir DSEOut
>hadoop dfs -mkdir CMIn
>hadoop dfs -mkdir CMOut
>hadoop dfs -mkdir DMIn
>hadoop dfs -mkdir DMOut

Next, generate example input files for the DNO applications by running:
>./set.py DSE 10
>./set.py CM 2
>./set.py DM 500

These commands create the input files needed by the applications running in the prototype. The second parameter, after the name of the application (DSE, CM, DM), specifies the length of its input file and, effectively, the execution time of the application. Run the Data Store server script:
>./ds.py

The script is part of the client-server interface which sits on top of the Data System. Open a terminal window and log into the node hpc1. Run the following script:
>./monitor.sh

Open a second terminal window and log into the node storage. Run the following script:
>./filemon.sh

The two terminal windows will show scheduling and checkpoint information, respectively. Open another terminal window and log into the hpc1 node. Run the prototype demonstration with the following command:
>./demo.sh

The demo takes a couple of minutes to execute. In the terminal window with scheduling information, one can see which jobs are submitted but waiting for execution, which are executing, and on which nodes. The second terminal window shows how checkpoint files are created. Both terminal windows are shown in figure 2, which presents a snapshot from the running prototype. We can see that five jobs are present in the HPC system: four DM jobs and one DSE job. Three DM jobs are not running because of pre-emption by the DSE job; only one DM job is in execution. In the right terminal window we can see that new checkpoint files are being generated for the running DM job.
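The checkpoint files appearing in the right-hand terminal come from the DM model application's checkpoint/warm-restart logic. The following is a minimal sketch of that pattern under our own assumptions (the file name and record format are illustrative; the shipped DM application defines the real ones):

```shell
# Checkpoint/warm-restart pattern that makes kill-and-requeue safe (illustrative).
CKPT="${TMPDIR:-/tmp}/dm.ckpt"
TOTAL=10                                           # number of work units in the input
start=0
if [ -f "$CKPT" ]; then start=$(cat "$CKPT"); fi   # warm restart: resume progress
i=$start
while [ "$i" -lt "$TOTAL" ]; do
  :                                                # ... process one unit of input ...
  i=$((i + 1))
  echo "$i" > "$CKPT"                              # persist progress after every unit
done
```

If the scheduler kills and requeues the job mid-run, the next invocation reads the checkpoint and skips the units already processed.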
Figure 2. HPC Prototype demo.

Other key commands (HDFS, Torque, MAUI)

In order to manage the prototype, the following commands may prove useful. A more complete list of commands to manage HDFS, Torque and MAUI can be found through the references [2-4].

HDFS

The basic operations on HDFS include formatting the HDFS file system and copying, listing and removing files. To format the HDFS, use the following command:
>sudo hadoop namenode -format

To display the list of files in an HDFS directory, issue the following command:
>hadoop dfs -ls URI

To remove a file from the HDFS, issue the following command:
>hadoop dfs -rm URI

To display the content of a file on the terminal, issue the following command:
>hadoop dfs -cat URI

To copy a file from the local file system to HDFS, use the following command:
>hadoop dfs -copyFromLocal <localsrc> URI

To copy a file from HDFS to the local file system, use the following command:
>hadoop dfs -copyToLocal URI <localdst>

For the complete list of HDFS commands, please refer to [4].

Torque

By default, Torque starts its standard PBS scheduler when the head node boots up. The PBS scheduler is not part of the prototype and should be stopped manually before enabling MAUI (see the next section). To stop the PBS scheduler, type the following command on the hpc1 front node:
>sudo pkill pbs_sched

How to start the MAUI scheduler is described further on. To verify whether the HPC compute nodes are running, use the following command on the hpc1 front node:
>pbsnodes

In the output of this command, check that the state of each node is set to free. If it is, the node is up and running and waiting for jobs to be submitted. To check the list of submitted jobs, use the following command on the head node hpc1:
>qstat

The command displays the list of jobs submitted to the resource manager, together with their state.

MAUI

The key configuration tool for the MAUI scheduler is its configuration file. It is not the purpose of this document to go in depth into the configuration parameters of the MAUI scheduler; for those interested, refer to [2]. The configuration file maui.cfg is provided together with this deliverable and should be copied to the appropriate location on the hpc1 head node before starting the MAUI scheduler. The location for the MAUI configuration file is /var/spool/maui/maui.cfg. Make sure that the default PBS scheduler is not running (see the previous section). To start MAUI, use the following command on the hpc1 head node:
>maui

Editing schedules

The prototype is shipped with an example scenario, demo.sh. It is also possible to submit CM, DM and DSE jobs without the demo.sh script. To submit CM or DM jobs, type:
>./submit.sh DM
>./submit.sh CM

To submit DSE, one can additionally specify how many nodes the DSE job requires (max 3):
>./submit.sh DSE 2
>./submit.sh DSE 3

To influence the runtime of the jobs, modify the size of the input files stored in the Data Store (see the Data System setup in the previous chapter). For example, to halve the execution time of the DSE, execute the following command on hdfs1:
>./set.py DSE 5
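The effect of set.py — an input file whose length controls a job's runtime — can be pictured with a sketch along the following lines. This is our illustration only; the record format and file naming of the shipped set.py are assumptions, not its actual implementation:

```python
import sys

def make_input(app: str, n: int, path: str) -> None:
    """Write n synthetic input records for the named application (DSE, CM or DM)."""
    with open(path, "w") as f:
        for i in range(n):
            f.write(f"{app} record {i}\n")   # placeholder record format

# e.g. "python set_sketch.py DSE 10" writes a 10-record file DSE.in
if __name__ == "__main__" and len(sys.argv) == 3:
    app, n = sys.argv[1], int(sys.argv[2])
    make_input(app, n, f"{app}.in")
```

Halving n, as in the ./set.py DSE 5 example above, halves the amount of input the model application has to process and hence (roughly) its execution time.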
5. From Prototype to System Deployment

The HPC Prototype described in this document lays the groundwork for the work ahead in WP1.2 and the other work packages of the HiPerDNO project. We are planning to deploy the current HPC prototype on real hardware, and this deployment will be made available to partners in the HiPerDNO project. Based on initial discussions within the OeRC team, we will make available an Intel-based cluster consisting of seven or eight dual-CPU nodes. It will be configured so as to allow both the development and tuning of real applications and the study of the overall HPC System under realistic deployment conditions. We plan to use Rocks to ease the deployment and maintenance of the system, with interconnects adequate to the task in hand (hopefully 10 Gbit Ethernet). Work on the deployment will commence in August, and we plan to make it available to all partners in September. We are also starting collaborations with various HiPerDNO partners (EDF, BU, Indra/UF) with the aim of porting their algorithms to, and integrating them on, the HPC platform. More information can be found in the forthcoming HiPerDNO deliverable [7].
6. References

1. S. Salvini, P. Lopatka, D. Wallom, HiPerDNO Deliverable D, Report on the architecture and performance criteria used for selection.
2. Adaptive Computing Enterprises, Maui Scheduler Administrator's Guide, version 3.2.
3. Cluster Resources, TORQUE Admin Manual, version 3.0.
4. The Apache Software Foundation, HDFS User Guide.
5. Oracle Corporation, Oracle VM VirtualBox User Manual.
6. Ubuntu documentation, Setting up NFS HowTo.
7. HiPerDNO project deliverable D3.1.1, Detailed specification report on HPC architecture and platform standardisation to support the development of novel DMS functionality.
More informationPFSENSE Load Balance with Fail Over From Version Beta3
PFSENSE Load Balance with Fail Over From Version Beta3 Following are the Installation instructions of PFSense beginning at first Login to setup Load Balance and Fail over procedures for outbound Internet
More informationCloud Implementation using OpenNebula
Cloud Implementation using OpenNebula Best Practice Document Produced by the MARnet-led working group on campus networking Authors: Vasko Sazdovski (FCSE/MARnet), Boro Jakimovski (FCSE/MARnet) April 2016
More informationHSearch Installation
To configure HSearch you need to install Hadoop, Hbase, Zookeeper, HSearch and Tomcat. 1. Add the machines ip address in the /etc/hosts to access all the servers using name as shown below. 2. Allow all
More informationProduct Version 1.0 Document Version 1.0-B
VidyoDashboard Installation Guide Product Version 1.0 Document Version 1.0-B Table of Contents 1. Overview... 3 About This Guide... 3 Prerequisites... 3 2. Installing VidyoDashboard... 5 Installing the
More informationHadoop Basics with InfoSphere BigInsights
An IBM Proof of Technology Hadoop Basics with InfoSphere BigInsights Unit 4: Hadoop Administration An IBM Proof of Technology Catalog Number Copyright IBM Corporation, 2013 US Government Users Restricted
More informationReport on virtualisation technology as used at the EPO for Online Filing software testing
Report on virtualisation technology as used at the EPO for Online Filing software testing Virtualisation technology lets one computer do the job of multiple computers, all sharing the resources - including
More informationWork Environment. David Tur HPC Expert. HPC Users Training September, 18th 2015
Work Environment David Tur HPC Expert HPC Users Training September, 18th 2015 1. Atlas Cluster: Accessing and using resources 2. Software Overview 3. Job Scheduler 1. Accessing Resources DIPC technicians
More informationSetting up VNC, SAMBA and SSH on Ubuntu Linux PCs Getting More Benefit out of Your Local Area Network
What Are These Programs? VNC (Virtual Network Computing) is a networking application that allows one computer's screen to be viewed by, and optionally controlled by one or more other computers through
More informationA Study of Data Management Technology for Handling Big Data
Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 3, Issue. 9, September 2014,
More informationAspen Cloud Server Management Console
Aspen Cloud Server Management Console Management of Cloud Server Resources Power All Networks Ltd. User Guide June 2011, version 1.1.1 Refer to ICP V1.1 PAGE 1 Table of Content 1. Introduction... 4 2.
More informationHadoop Data Warehouse Manual
Ruben Vervaeke & Jonas Lesy 1 Hadoop Data Warehouse Manual To start off, we d like to advise you to read the thesis written about this project before applying any changes to the setup! The thesis can be
More informationSingle Node Hadoop Cluster Setup
Single Node Hadoop Cluster Setup This document describes how to create Hadoop Single Node cluster in just 30 Minutes on Amazon EC2 cloud. You will learn following topics. Click Here to watch these steps
More informationModule I-7410 Advanced Linux FS-11 Part1: Virtualization with KVM
Bern University of Applied Sciences Engineering and Information Technology Module I-7410 Advanced Linux FS-11 Part1: Virtualization with KVM By Franz Meyer Version 1.0 February 2011 Virtualization Architecture
More informationLinux Terminal Server Project
Linux Terminal Server Project Tested by : C.V. UDAYASANKAR mail id: udayasankar.0606@gmail.com The Linux Terminal Server Project adds thin client support to Linux servers. It allows you to set up a diskless
More informationHOWTO: Set up a Vyatta device with ThreatSTOP in router mode
HOWTO: Set up a Vyatta device with ThreatSTOP in router mode Overview This document explains how to set up a minimal Vyatta device in a routed configuration and then how to apply ThreatSTOP to it. It is
More informationDeploy and Manage Hadoop with SUSE Manager. A Detailed Technical Guide. Guide. Technical Guide Management. www.suse.com
Deploy and Manage Hadoop with SUSE Manager A Detailed Technical Guide Guide Technical Guide Management Table of Contents page Executive Summary.... 2 Setup... 3 Networking... 4 Step 1 Configure SUSE Manager...6
More informationAn Oracle White Paper July 2012. Oracle VM 3: Building a Demo Environment using Oracle VM VirtualBox
An Oracle White Paper July 2012 Oracle VM 3: Building a Demo Environment using Oracle VM VirtualBox Introduction... 1 Overview... 2 The Concept... 2 The Process Flow... 3 What You Need to Get Started...
More informationProcedure to Create and Duplicate Master LiveUSB Stick
Procedure to Create and Duplicate Master LiveUSB Stick A. Creating a Master LiveUSB stick using 64 GB USB Flash Drive 1. Formatting USB stick having Linux partition (skip this step if you are using a new
More informationCloudera Distributed Hadoop (CDH) Installation and Configuration on Virtual Box
Cloudera Distributed Hadoop (CDH) Installation and Configuration on Virtual Box By Kavya Mugadur W1014808 1 Table of contents 1.What is CDH? 2. Hadoop Basics 3. Ways to install CDH 4. Installation and
More informationCycleServer Grid Engine Support Install Guide. version 1.25
CycleServer Grid Engine Support Install Guide version 1.25 Contents CycleServer Grid Engine Guide 1 Administration 1 Requirements 1 Installation 1 Monitoring Additional OGS/SGE/etc Clusters 3 Monitoring
More informationHDFS Users Guide. Table of contents
Table of contents 1 Purpose...2 2 Overview...2 3 Prerequisites...3 4 Web Interface...3 5 Shell Commands... 3 5.1 DFSAdmin Command...4 6 Secondary NameNode...4 7 Checkpoint Node...5 8 Backup Node...6 9
More informationHow to Create, Setup, and Configure an Ubuntu Router with a Transparent Proxy.
In this tutorial I am going to explain how to setup a home router with transparent proxy using Linux Ubuntu and Virtualbox. Before we begin to delve into the heart of installing software and typing in
More informationA technical whitepaper describing steps to setup a Private Cloud using the Eucalyptus Private Cloud Software and Xen hypervisor.
A technical whitepaper describing steps to setup a Private Cloud using the Eucalyptus Private Cloud Software and Xen hypervisor. Vivek Juneja Cloud Computing COE Torry Harris Business Solutions INDIA Contents
More informationINUVIKA OVD INSTALLING INUVIKA OVD ON UBUNTU 14.04 (TRUSTY TAHR)
INUVIKA OVD INSTALLING INUVIKA OVD ON UBUNTU 14.04 (TRUSTY TAHR) Mathieu SCHIRES Version: 0.9.1 Published December 24, 2014 http://www.inuvika.com Contents 1 Prerequisites: Ubuntu 14.04 (Trusty Tahr) 3
More information"Charting the Course...... to Your Success!" MOC 50290 A Understanding and Administering Windows HPC Server 2008. Course Summary
Description Course Summary This course provides students with the knowledge and skills to manage and deploy Microsoft HPC Server 2008 clusters. Objectives At the end of this course, students will be Plan
More informationCactoScale Guide User Guide. Athanasios Tsitsipas (UULM), Papazachos Zafeirios (QUB), Sakil Barbhuiya (QUB)
CactoScale Guide User Guide Athanasios Tsitsipas (UULM), Papazachos Zafeirios (QUB), Sakil Barbhuiya (QUB) Version History Version Date Change Author 0.1 12/10/2014 Initial version Athanasios Tsitsipas(UULM)
More informationApache Hadoop 2.0 Installation and Single Node Cluster Configuration on Ubuntu A guide to install and setup Single-Node Apache Hadoop 2.
EDUREKA Apache Hadoop 2.0 Installation and Single Node Cluster Configuration on Ubuntu A guide to install and setup Single-Node Apache Hadoop 2.0 Cluster edureka! 11/12/2013 A guide to Install and Configure
More informationVirtual Appliance Setup Guide
Virtual Appliance Setup Guide 2015 Bomgar Corporation. All rights reserved worldwide. BOMGAR and the BOMGAR logo are trademarks of Bomgar Corporation; other trademarks shown are the property of their respective
More informationComodo MyDLP Software Version 2.0. Installation Guide Guide Version 2.0.010215. Comodo Security Solutions 1255 Broad Street Clifton, NJ 07013
Comodo MyDLP Software Version 2.0 Installation Guide Guide Version 2.0.010215 Comodo Security Solutions 1255 Broad Street Clifton, NJ 07013 Table of Contents 1.About MyDLP... 3 1.1.MyDLP Features... 3
More informationULTEO OPEN VIRTUAL DESKTOP UBUNTU 12.04 (PRECISE PANGOLIN) SUPPORT
ULTEO OPEN VIRTUAL DESKTOP V4.0.2 UBUNTU 12.04 (PRECISE PANGOLIN) SUPPORT Contents 1 Prerequisites: Ubuntu 12.04 (Precise Pangolin) 3 1.1 System Requirements.............................. 3 1.2 sudo.........................................
More informationVirtual Managment Appliance Setup Guide
Virtual Managment Appliance Setup Guide 2 Sophos Installing a Virtual Appliance Installing a Virtual Appliance As an alternative to the hardware-based version of the Sophos Web Appliance, you can deploy
More informationDeploying Cloudera CDH (Cloudera Distribution Including Apache Hadoop) with Emulex OneConnect OCe14000 Network Adapters
Deploying Cloudera CDH (Cloudera Distribution Including Apache Hadoop) with Emulex OneConnect OCe14000 Network Adapters Table of Contents Introduction... Hardware requirements... Recommended Hadoop cluster
More informationUser Manual of the Pre-built Ubuntu 12.04 Virutal Machine
SEED Labs 1 User Manual of the Pre-built Ubuntu 12.04 Virutal Machine Copyright c 2006-2014 Wenliang Du, Syracuse University. The development of this document is/was funded by three grants from the US
More informationPenetration Testing LAB Setup Guide
Penetration Testing LAB Setup Guide (External Attacker - Intermediate) By: magikh0e - magikh0e@ihtb.org Last Edit: July 06 2012 This guide assumes a few things... 1. You have read the basic guide of this
More informationIntegrating SAP BusinessObjects with Hadoop. Using a multi-node Hadoop Cluster
Integrating SAP BusinessObjects with Hadoop Using a multi-node Hadoop Cluster May 17, 2013 SAP BO HADOOP INTEGRATION Contents 1. Installing a Single Node Hadoop Server... 2 2. Configuring a Multi-Node
More informationOpenCPN Garmin Radar Plugin
OpenCPN Garmin Radar Plugin Hardware Interface The Garmin Radar PlugIn for OpenCPN requires a specific hardware interface in order to allow the OpenCPN application to access the Ethernet data captured
More informationDeploying Business Virtual Appliances on Open Source Cloud Computing
International Journal of Computer Science and Telecommunications [Volume 3, Issue 4, April 2012] 26 ISSN 2047-3338 Deploying Business Virtual Appliances on Open Source Cloud Computing Tran Van Lang 1 and
More informationA SHORT INTRODUCTION TO BITNAMI WITH CLOUD & HEAT. Version 1.12 2014-07-01
A SHORT INTRODUCTION TO BITNAMI WITH CLOUD & HEAT Version 1.12 2014-07-01 PAGE _ 2 TABLE OF CONTENTS 1. Introduction.... 3 2. Logging in to Cloud&Heat Dashboard... 4 2.1 Overview of Cloud&Heat Dashboard....
More informationTesting New Applications In The DMZ Using VMware ESX. Ivan Dell Era Software Engineer IBM
Testing New Applications In The DMZ Using VMware ESX Ivan Dell Era Software Engineer IBM Agenda Problem definition Traditional solution The solution with VMware VI Remote control through the firewall Problem
More informationHow to Backup and Restore a VM using Veeam
How to Backup and Restore a VM using Veeam Table of Contents Introduction... 3 Assumptions... 3 Add ESXi Server... 4 Backup a VM... 6 Restore Full VM... 12 Appendix A: Install Veeam Backup & Replication
More informationVMware View Design Guidelines. Russel Wilkinson, Enterprise Desktop Solutions Specialist, VMware
VMware View Design Guidelines Russel Wilkinson, Enterprise Desktop Solutions Specialist, VMware 1 2 Overview Steps to follow: Getting from concept to reality Design process: Optimized for efficiency Best
More informationPrivate Cloud in Educational Institutions: An Implementation using UEC
Private Cloud in Educational Institutions: An Implementation using UEC D. Sudha Devi L.Yamuna Devi K.Thilagavathy,Ph.D P.Aruna N.Priya S. Vasantha,Ph.D ABSTRACT Cloud Computing, the emerging technology,
More informationData Analytics. CloudSuite1.0 Benchmark Suite Copyright (c) 2011, Parallel Systems Architecture Lab, EPFL. All rights reserved.
Data Analytics CloudSuite1.0 Benchmark Suite Copyright (c) 2011, Parallel Systems Architecture Lab, EPFL All rights reserved. The data analytics benchmark relies on using the Hadoop MapReduce framework
More informationVirtual Web Appliance Setup Guide
Virtual Web Appliance Setup Guide 2 Sophos Installing a Virtual Appliance Installing a Virtual Appliance This guide describes the procedures for installing a Virtual Web Appliance. If you are installing
More informationHADOOP - MULTI NODE CLUSTER
HADOOP - MULTI NODE CLUSTER http://www.tutorialspoint.com/hadoop/hadoop_multi_node_cluster.htm Copyright tutorialspoint.com This chapter explains the setup of the Hadoop Multi-Node cluster on a distributed
More informationChase Wu New Jersey Ins0tute of Technology
CS 698: Special Topics in Big Data Chapter 4. Big Data Analytics Platforms Chase Wu New Jersey Ins0tute of Technology Some of the slides have been provided through the courtesy of Dr. Ching-Yung Lin at
More informationVirtual CD v10. Network Management Server Manual. H+H Software GmbH
Virtual CD v10 Network Management Server Manual H+H Software GmbH Table of Contents Table of Contents Introduction 1 Legal Notices... 2 What Virtual CD NMS can do for you... 3 New Features in Virtual
More informationNEFSIS DEDICATED SERVER
NEFSIS TRAINING SERIES Nefsis Dedicated Server version 5.2.0.XXX (DRAFT Document) Requirements and Implementation Guide (Rev5-113009) REQUIREMENTS AND INSTALLATION OF THE NEFSIS DEDICATED SERVER Nefsis
More informationOptions in Open Source Virtualization and Cloud Computing. Andrew Hadinyoto Republic Polytechnic
Options in Open Source Virtualization and Cloud Computing Andrew Hadinyoto Republic Polytechnic No Virtualization Application Operating System Hardware Virtualization (general) Application Application
More informationThe Maui High Performance Computing Center Department of Defense Supercomputing Resource Center (MHPCC DSRC) Hadoop Implementation on Riptide - -
The Maui High Performance Computing Center Department of Defense Supercomputing Resource Center (MHPCC DSRC) Hadoop Implementation on Riptide - - Hadoop Implementation on Riptide 2 Table of Contents Executive
More informationDeploying a Virtual Machine (Instance) using a Template via CloudStack UI in v4.5.x (procedure valid until Oct 2015)
Deploying a Virtual Machine (Instance) using a Template via CloudStack UI in v4.5.x (procedure valid until Oct 2015) Access CloudStack web interface via: Internal access links: http://cloudstack.doc.ic.ac.uk
More informationHadoop Tutorial. General Instructions
CS246: Mining Massive Datasets Winter 2016 Hadoop Tutorial Due 11:59pm January 12, 2016 General Instructions The purpose of this tutorial is (1) to get you started with Hadoop and (2) to get you acquainted
More informationOracle Managed File Getting Started - Transfer FTP Server to File Table of Contents
Oracle Managed File Getting Started - Transfer FTP Server to File Table of Contents Goals... 3 High- Level Steps... 4 Basic FTP to File with Compression... 4 Steps in Detail... 4 MFT Console: Login and
More informationAbout the VM-Series Firewall
About the VM-Series Firewall Palo Alto Networks VM-Series Deployment Guide PAN-OS 6.0 Contact Information Corporate Headquarters: Palo Alto Networks 4401 Great America Parkway Santa Clara, CA 95054 http://www.paloaltonetworks.com/contact/contact/
More informationIntellicus Enterprise Reporting and BI Platform
Intellicus Cluster and Load Balancer Installation and Configuration Manual Intellicus Enterprise Reporting and BI Platform Intellicus Technologies info@intellicus.com www.intellicus.com Copyright 2012
More informationAddonics T E C H N O L O G I E S. NAS Adapter. Model: NASU2. 1.0 Key Features
1.0 Key Features Addonics T E C H N O L O G I E S NAS Adapter Model: NASU2 User Manual Convert any USB 2.0 / 1.1 mass storage device into a Network Attached Storage device Great for adding Addonics Storage
More informationIntroduction to Big data. Why Big data? Case Studies. Introduction to Hadoop. Understanding Features of Hadoop. Hadoop Architecture.
Big Data Hadoop Administration and Developer Course This course is designed to understand and implement the concepts of Big data and Hadoop. This will cover right from setting up Hadoop environment in
More informationINUVIKA TECHNICAL GUIDE
--------------------------------------------------------------------------------------------------- INUVIKA TECHNICAL GUIDE ENTERPRISE EVALUATION GUIDE OVD Enterprise External Document Version 1.1 Published
More informationIDS 561 Big data analytics Assignment 1
IDS 561 Big data analytics Assignment 1 Due Midnight, October 4th, 2015 General Instructions The purpose of this tutorial is (1) to get you started with Hadoop and (2) to get you acquainted with the code
More informationKeyword: YARN, HDFS, RAM
Volume 4, Issue 11, November 2014 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Big Data and
More informationBig Data Operations Guide for Cloudera Manager v5.x Hadoop
Big Data Operations Guide for Cloudera Manager v5.x Hadoop Logging into the Enterprise Cloudera Manager 1. On the server where you have installed 'Cloudera Manager', make sure that the server is running,
More informationBack Up Linux And Windows Systems With BackupPC
By Falko Timme Published: 2007-01-25 14:33 Version 1.0 Author: Falko Timme Last edited 01/19/2007 This tutorial shows how you can back up Linux and Windows systems with BackupPC.
More informationDepartment of Veterans Affairs VistA Integration Adapter Release 1.0.5.0 Enhancement Manual
Department of Veterans Affairs VistA Integration Adapter Release 1.0.5.0 Enhancement Manual Version 1.1 September 2014 Revision History Date Version Description Author 09/28/2014 1.0 Updates associated
More informationNetBoot/SUS Server User Guide. Version 2.0
NetBoot/SUS Server User Guide Version 2.0 JAMF Software, LLC 2013 JAMF Software, LLC. All rights reserved. JAMF Software has made all efforts to ensure that this guide is accurate. JAMF Software 301 4th
More informationRSA Security Analytics Virtual Appliance Setup Guide
RSA Security Analytics Virtual Appliance Setup Guide Copyright 2010-2015 RSA, the Security Division of EMC. All rights reserved. Trademarks RSA, the RSA Logo and EMC are either registered trademarks or
More informationCentral Management System
Central Management System Software Installation Guide Ver. 1.5.0.101115.001 ... ii System Introduction... 3 Client/Server Architecture...3 System Requirements... 4 System Setup...4 Multiple Monitor Configuration...5
More informationUser Manual. Onsight Management Suite Version 5.1. Another Innovation by Librestream
User Manual Onsight Management Suite Version 5.1 Another Innovation by Librestream Doc #: 400075-06 May 2012 Information in this document is subject to change without notice. Reproduction in any manner
More informationIntegration Of Virtualization With Hadoop Tools
Integration Of Virtualization With Hadoop Tools Aparna Raj K aparnaraj.k@iiitb.org Kamaldeep Kaur Kamaldeep.Kaur@iiitb.org Uddipan Dutta Uddipan.Dutta@iiitb.org V Venkat Sandeep Sandeep.VV@iiitb.org Technical
More informationHadoop Installation. Sandeep Prasad
Hadoop Installation Sandeep Prasad 1 Introduction Hadoop is a system to manage large quantity of data. For this report hadoop- 1.0.3 (Released, May 2012) is used and tested on Ubuntu-12.04. The system
More informationFile Transfer Examples. Running commands on other computers and transferring files between computers
Running commands on other computers and transferring files between computers 1 1 Remote Login Login to remote computer and run programs on that computer Once logged in to remote computer, everything you
More informationLSKA 2010 Survey Report Job Scheduler
LSKA 2010 Survey Report Job Scheduler Graduate Institute of Communication Engineering {r98942067, r98942112}@ntu.edu.tw March 31, 2010 1. Motivation Recently, the computing becomes much more complex. However,
More informationSETTING UP A LAMP SERVER REMOTELY
SETTING UP A LAMP SERVER REMOTELY It s been said a million times over Linux is awesome on servers! With over 60 per cent of the Web s servers gunning away on the mighty penguin, the robust, resilient,
More informationPartek Flow Installation Guide
Partek Flow Installation Guide Partek Flow is a web based application for genomic data analysis and visualization, which can be installed on a desktop computer, compute cluster or cloud. Users can access
More informationVirtual machine W4M- Galaxy: Installation guide
Virtual machine W4M- Galaxy: Installation guide Christophe Duperier August, 6 th 2014 v03 This document describes the installation procedure and the functionalities provided by the W4M- Galaxy virtual
More informationSetup Hadoop On Ubuntu Linux. ---Multi-Node Cluster
Setup Hadoop On Ubuntu Linux ---Multi-Node Cluster We have installed the JDK and Hadoop for you. The JAVA_HOME is /usr/lib/jvm/java/jdk1.6.0_22 The Hadoop home is /home/user/hadoop-0.20.2 1. Network Edit
More informationAssignment # 1 (Cloud Computing Security)
Assignment # 1 (Cloud Computing Security) Group Members: Abdullah Abid Zeeshan Qaiser M. Umar Hayat Table of Contents Windows Azure Introduction... 4 Windows Azure Services... 4 1. Compute... 4 a) Virtual
More informationHow to install Apache Hadoop 2.6.0 in Ubuntu (Multi node setup)
How to install Apache Hadoop 2.6.0 in Ubuntu (Multi node setup) Author : Vignesh Prajapati Categories : Hadoop Date : February 22, 2015 Since you have reached on this blogpost of Setting up Multinode Hadoop
More informationEfficient Load Balancing using VM Migration by QEMU-KVM
International Journal of Computer Science and Telecommunications [Volume 5, Issue 8, August 2014] 49 ISSN 2047-3338 Efficient Load Balancing using VM Migration by QEMU-KVM Sharang Telkikar 1, Shreyas Talele
More informationFOG Guide. IPBRICK International. July 17, 2013
FOG Guide IPBRICK International July 17, 2013 1 Copyright c IPBRICK International All rights reserved. The information in this manual is subject to change without prior notice. The presented explanations,
More informationVirtual Appliance Installation Guide
> In This Chapter Document: : Installing the OpenManage Network Manager Virtual Appliance 2 Virtual Appliance Quick Start 2 Start the Virtual Machine 6 Start the Application 7 The Application is Ready
More informationExtending Remote Desktop for Large Installations. Distributed Package Installs
Extending Remote Desktop for Large Installations This article describes four ways Remote Desktop can be extended for large installations. The four ways are: Distributed Package Installs, List Sharing,
More informationInstall Guide for JunosV Wireless LAN Controller
The next-generation Juniper Networks JunosV Wireless LAN Controller is a virtual controller using a cloud-based architecture with physical access points. The current functionality of a physical controller
More informationQuick Deployment: Step-by-step instructions to deploy the SampleApp Virtual Machine v406
Quick Deployment: Step-by-step instructions to deploy the SampleApp Virtual Machine v406 Note: additional supplemental documentation is annotated by Visit us on YouTube at Oracle BI TECHDEMOs for dozens
More information