Large Scale file storage with MogileFS. Stuart Teasdale, Lead System Administrator, we7 Ltd
Transcription
1 Large Scale file storage with MogileFS. Stuart Teasdale, Lead System Administrator, we7 Ltd
2 About We7
A web-based streaming music service
6.5 million tracks
192kbps and 320kbps MP3s
Sending over a gigabit per second of streams at peak
3 Our requirements
Store all those MP3s
Be able to grab any stream quickly
Losing files is not acceptable
Files being offline is not acceptable
4 First Solution
Big NFS server with RAID5
Another one in a different datacentre
rsync between the two
5 Problems with NFS/RAID
Does anyone trust RAID5 any more?
Increasing capacity is tough
NFS fragility
All eggs in one software and hardware basket
6 Options
Big old Sun Fire machine with 48 1TB disks: an extension of the previous solution that shifts the scaling limit but still leaves disk failure handling to deal with
SANs and NASs: price and power consumption count against them
Spread it across lots of disks and machines: adds complexity, but allows us to get more from our hardware
7 Distributed Filesystems
Two main types:
Full POSIX filesystems (OCFS2, GFS): need full clustering, lock managers, etc.
Application filesystems: throw out all that POSIX hassle and make your application do the hard work. Examples: Hadoop's HDFS, the other GFS (Google's), MogileFS
8 MogileFS
Application level
No single points of failure
Automatic file replication: better than RAID
Flat namespace
Shared-nothing: no RAID required
Local filesystem agnostic
9 Logical Layout
10 Storage Nodes
WebDAV server: the built-in Perlbal-based server (mogstored), or lighttpd, nginx, etc.
Storage node statistics
Hardware needs: lots of disks, decent network cards, a hot-swap disk controller is useful
Leaves plenty of CPU and memory for other tasks
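As an illustration, a minimal storage-node configuration might look like the sketch below. The ports and docroot shown are the usual mogstored defaults, but treat the exact option names and values as assumptions to verify against your MogileFS version.

```ini
# /etc/mogilefs/mogstored.conf -- minimal storage-node sketch (assumed defaults)
httplisten = 0.0.0.0:7500   # WebDAV/HTTP port that clients and replication fetch from
mgmtlisten = 0.0.0.0:7501   # management port the trackers poll for device usage stats
docroot    = /var/mogdata   # parent directory holding the dev1, dev2, ... mount points
```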
11 Trackers
Manage all client communications
The parent process passes requests to 'query workers'
Workers handle: replication, deletion, queries, the reaper, monitoring
The parent can load-balance across multiple workers
Hardware needs are modest
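For a feel of what the trackers actually do, here is a rough sketch of talking to one directly in Perl. The tracker speaks a simple line-based TCP protocol, conventionally on port 7001; the hostname, domain and key below are placeholders, and the exact wire format is an assumption rather than something taken from the slides.

```perl
use strict;
use warnings;
use IO::Socket::INET;

# Connect to a tracker (placeholder host; 7001 is the usual tracker port)
my $sock = IO::Socket::INET->new(
    PeerAddr => 'tracker1',
    PeerPort => 7001,
    Proto    => 'tcp',
) or die "connect failed: $!";

# Ask where the replicas of a key live; arguments are URL-encoded key=value pairs
print $sock "get_paths domain=we7music&key=track-0001234.mp3\r\n";

# A reply looks roughly like:
#   OK paths=2&path1=http://stor1:7500/dev3/0/000/123/0000001234.fid&path2=...
my $reply = <$sock>;
print $reply;
```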
12 Database
Used as the metadata store
MySQL is the most mature; Postgres and SQLite are also available
Do HA however you prefer
Database size is proportionate to the number of files stored, but not huge
Hardware needs are in line with a typical database app
13 Domains, Classes and Files
Domains: the top-level division of the store; file keys are unique within a domain
Classes: define groups of files that share replication policies
Files: the objects we actually store; each file in a domain must belong to one class
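As a sketch, creating a domain and a class that keeps three copies of each file might look like this with mogadm. The tracker address, domain and class names are invented for illustration.

```shell
# Assumed tracker address; adjust for your deployment
mogadm --trackers=tracker1:7001 domain add we7music
mogadm --trackers=tracker1:7001 class add we7music mp3s --mindevcount=3
mogadm --trackers=tracker1:7001 class list
```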
14 Replication Policies
Simple: how many copies (mindevcount)
Complex: where to put them?
Hooks to add custom policies
15 Zones and Networks
An example of a more complex replication policy
The Network module makes MogileFS network aware
The Zones module defines different networks as different zones, e.g. zone_hex = /24, zone_sov = /24
The replication policy then defines where files are stored, e.g. HostsPerNetwork(sov=2,hex=2)
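A hedged sketch of how this might be wired up: the setting names (network_zones, zone_<name>) and the --replpolicy option follow the MogileFS::Network plugin conventions, but the subnets shown are placeholders (the slide elides the real prefixes) and the exact syntax should be checked against the plugin documentation.

```shell
# Placeholder subnets standing in for the elided /24s on the slide
mogadm --trackers=tracker1:7001 settings set network_zones hex,sov
mogadm --trackers=tracker1:7001 settings set zone_hex 192.0.2.0/24
mogadm --trackers=tracker1:7001 settings set zone_sov 198.51.100.0/24
# Require two copies in each datacentre for the mp3s class
mogadm --trackers=tracker1:7001 class modify we7music mp3s \
    --replpolicy="HostsPerNetwork(sov=2,hex=2)"
```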
16 Coping with Failure
Device states: alive, read-only, drain, down, dead
Host states: alive, down, dead
Dead devices and the Reaper
17 Drain and Rebalance
Pre 2.40: drain removes files from the device; basic rebalancing
Post 2.40: complex rebalance policies; drain no longer removes files
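As an illustrative sketch of these state changes (the hostname and device ID are made up, and the exact mogadm sub-commands can vary between versions), taking a failing device out of service might look like:

```shell
# Stop new writes to device 12 on stor3 while it keeps serving reads
mogadm --trackers=tracker1:7001 device mark stor3 12 drain
# Once it is empty, or the disk has actually died, mark it dead so the
# replication workers re-create its files on other devices
mogadm --trackers=tracker1:7001 device mark stor3 12 dead
```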
18 Checks and Monitoring
The internal 'fsck' walks all the files and checks that each file exists where we expect it to and that its replication policy is fulfilled
We7 does extra integrity checks on a per-server basis
Monitoring plugins exist for Munin and Nagios
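For example, the fsck worker is driven from the command line; a sketch, assuming a recent mogadm and the placeholder tracker used above:

```shell
mogadm --trackers=tracker1:7001 fsck start     # begin a full walk of the files
mogadm --trackers=tracker1:7001 fsck status    # progress, and what it has found so far
mogadm --trackers=tracker1:7001 fsck stop      # pause the walk
```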
19 Using MogileFS: Command Line
mogadm: manipulate domains and classes; add, modify and remove hosts and devices; control cluster settings; control the fsck worker
mogstats: show device usage and status; show queue and worker states; list files and their replication counts
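A few representative invocations, sketched under assumptions: the mogadm commands are standard, but the mogstats flags shown (it usually talks to the metadata database directly) should be checked against your installed version.

```shell
mogadm --trackers=tracker1:7001 check          # overall cluster health
mogadm --trackers=tracker1:7001 host list      # hosts and their states
mogadm --trackers=tracker1:7001 device list    # devices, usage and states
# Assumed mogstats flag names and placeholder database credentials
mogstats --db_dsn="DBI:mysql:mogilefs:host=db1" --db_user=mogile \
         --db_pass=secret --stats=devices,files
```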
20 Using MogileFS: Command Line
mogtool: add, remove, query and manipulate files; deprecated in favour of a more Unix-like set of tools:
mogdelete: delete keys from a MogileFS installation
mogfetch: fetch data from a MogileFS installation
mogfiledebug: dump gobs of information about a FID
mogfileinfo: fetch key metadata from a MogileFS installation
moglistfids: iterate fid/key data from a MogileFS installation
moglistkeys: list keys out of a MogileFS domain
mogupload: upload data to a MogileFS installation
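For instance, a sketch with an invented key and file, using the flag names of the MogileFS::Utils tools:

```shell
mogupload   --trackers=tracker1:7001 --domain=we7music \
            --key="track-0001234.mp3" --file=/tmp/track-0001234.mp3
mogfileinfo --trackers=tracker1:7001 --domain=we7music --key="track-0001234.mp3"
mogfetch    --trackers=tracker1:7001 --domain=we7music \
            --key="track-0001234.mp3" --file=/tmp/copy.mp3
```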
21 Language Bindings
Implemented in Perl, so there is good Perl client library support
Java bindings are used by we7
Client libraries are available for Ruby, PHP and Python
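A minimal Perl sketch using MogileFS::Client; the tracker addresses, domain, class and key are placeholders rather than anything from the talk.

```perl
use strict;
use warnings;
use MogileFS::Client;

# Connect to the trackers for a given domain (placeholder addresses)
my $mogc = MogileFS::Client->new(
    domain => 'we7music',
    hosts  => [ 'tracker1:7001', 'tracker2:7001' ],
);

# Store a local file under a key, in the class whose replication policy should apply
$mogc->store_file('track-0001234.mp3', 'mp3s', '/tmp/track-0001234.mp3')
    or die "store failed: " . $mogc->errstr;

# Ask the trackers for HTTP paths to the replicas and stream from one of them
my @paths = $mogc->get_paths('track-0001234.mp3');
print "$_\n" for @paths;
```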
22 Scaling Up
Add storage nodes
Add more trackers
Send read-only queries to database slaves
Application-side caching, e.g. of paths in memcached
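One hedged sketch of the caching idea: keep get_paths results in memcached for a short TTL so hot tracks do not hit the trackers and database for every stream. The server addresses, domain and TTL are assumptions for illustration.

```perl
use strict;
use warnings;
use Cache::Memcached;
use MogileFS::Client;

my $memd = Cache::Memcached->new({ servers => ['127.0.0.1:11211'] });
my $mogc = MogileFS::Client->new(
    domain => 'we7music',
    hosts  => ['tracker1:7001'],
);

# Return cached replica URLs for a key, falling back to the trackers on a miss
sub cached_paths {
    my ($key) = @_;
    my $cached = $memd->get("paths:$key");
    return @$cached if $cached;

    my @paths = $mogc->get_paths($key) or return;
    $memd->set("paths:$key", \@paths, 60);   # cache for 60 seconds
    return @paths;
}

my @urls = cached_paths('track-0001234.mp3');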
23 More Information
24 Questions?