Distributed Storage Networks and Computer Forensics




Distributed Storage Networks and Computer Forensics
7: Storage Virtualization and DHT
Christian Schindelhauer, Technical Faculty, University of Freiburg
Winter Semester 2011/12

Overview
Concept of Virtualization
Storage Area Networks: principles, optimization
Distributed File Systems: without virtualization (e.g. network file systems), with virtualization (e.g. the Google File System)
Distributed Wide Area Storage Networks: Distributed Hash Tables, Peer-to-Peer Storage

Concept of Virtualization
Principle
- A virtual storage layer handles all application accesses to the file system
- The virtual disk partitions files and stores their blocks over several (physical) hard disks
- Control mechanisms allow redundancy and failure repair
Control
- The virtualization server assigns data, e.g. blocks of files, to hard disks (address space remapping)
- It controls the replication and redundancy strategy
- It adds and removes storage devices
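To make address space remapping concrete, here is a minimal sketch in Python (my own illustration, not code from the lecture; the class name and disk names are invented): a virtual disk that maps each logical block onto a configurable number of physical disks and remembers where the replicas live.

    import hashlib

    class VirtualDisk:
        def __init__(self, disks, replicas=2):
            self.disks = list(disks)            # names of the physical disks (made up)
            self.replicas = replicas            # redundancy controlled by the virtualization server
            self.mapping = {}                   # logical block -> [(disk, physical block), ...]
            self.next_free = {d: 0 for d in self.disks}

        def _pick_disks(self, logical_block):
            # deterministic, pseudo-random choice of target disks (address space remapping)
            h = int(hashlib.sha1(str(logical_block).encode()).hexdigest(), 16)
            start = h % len(self.disks)
            return [self.disks[(start + k) % len(self.disks)] for k in range(self.replicas)]

        def write(self, logical_block, data):
            placements = []
            for disk in self._pick_disks(logical_block):
                pb = self.next_free[disk]       # next free physical block on that disk
                self.next_free[disk] += 1
                placements.append((disk, pb))   # a real system would now write `data` to (disk, pb)
            self.mapping[logical_block] = placements
            return placements

        def locate(self, logical_block):
            # where the redundant copies of a logical block live
            return self.mapping.get(logical_block, [])

    vd = VirtualDisk(["disk0", "disk1", "disk2"], replicas=2)
    print(vd.write(42, b"block data"))
    print(vd.locate(42))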

Storage Virtualization
Capabilities: replication, pooling, disk management
Advantages: data migration, higher availability, simple maintenance, scalability
Disadvantages: un-installing is time consuming, compatibility and interoperability issues, complexity of the system
Classic implementations
- Host-based: Logical Volume Management, file systems (e.g. NFS)
- Storage-device-based: RAID
- Network-based: Storage Area Network
New approaches: Distributed Wide Area Storage Networks, Distributed Hash Tables, Peer-to-Peer Storage

Storage Area Networks
Virtual block devices without a file system: a SAN connects hard disks
Advantages
- simpler storage administration
- more flexible: servers can boot from the SAN
- effective disaster recovery
- allows storage replication
Drawback: compatibility problems between hard disks and the virtualization server

SAN Networking
Networking options
- FCP (Fibre Channel Protocol): SCSI over Fibre Channel
- iSCSI: SCSI over TCP/IP
- HyperSCSI: SCSI over Ethernet
- ATA over Ethernet
- Fibre Channel over Ethernet
- iSCSI over InfiniBand
- FCP over IP
http://en.wikipedia.org/wiki/storage_area_network

SAN File Systems
File systems for concurrent read and write operations by multiple computers
- without conventional file locking
- concurrent direct access to blocks by several servers
Examples: Veritas Cluster File System, Xsan, Global File System, Oracle Cluster File System, VMware VMFS, IBM General Parallel File System

Distributed File Systems (without Virtualization)
Also known as network file systems
Support sharing of files, tapes, printers, etc.
Allow multiple client processes on multiple hosts to read and write the same files; concurrency control or locking mechanisms are necessary
Examples: Network File System (NFS), Server Message Block (SMB)/Samba, Apple Filing Protocol (AFP), Amazon Simple Storage Service (S3)

Distributed File Systems with Virtualization
Example: Google File System
- A file system on top of other file systems, with built-in virtualization
- System built from cheap standard components (with high failure rates)
- Few large files
- Only operations: read, create, append, delete; concurrent appends and reads must be handled
- High bandwidth is important
Replication strategy: chunk replication and master replication
Read interaction (from the GFS paper): the application passes (file name, byte offset) to the GFS client, which uses the fixed chunk size (64 MB) to translate the offset into a chunk index; the master returns the chunk handle and the replica locations; the client caches this information and then requests the byte range directly from one chunkserver, so clients never read or write file data through the master.
[Figures from Ghemawat, Gobioff, Leung, "The Google File System": Figure 1, GFS Architecture; Figure 2, Write Control and Data Flow]
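The read interaction can be illustrated with a short sketch. The following Python is only illustrative and does not reflect the real GFS API; the stub classes, method names and sample file name are assumptions. It mirrors the flow described above: fixed chunk size, one master lookup per chunk (cached afterwards), data read directly from a chunkserver.

    CHUNK_SIZE = 64 * 2**20  # 64 MB chunk size, as in the GFS paper

    class Master:
        def __init__(self):
            # file namespace and file-to-chunk mapping, kept in memory
            self.chunks = {("/foo/bar", 0): ("2ef0", ["cs1", "cs2", "cs3"])}

        def lookup(self, filename, chunk_index):
            # returns (chunk handle, replica locations)
            return self.chunks[(filename, chunk_index)]

    class ChunkServer:
        def __init__(self, data):
            self.data = data  # chunk handle -> bytes, held in the local file system

        def read(self, handle, offset, length):
            return self.data[handle][offset:offset + length]

    class GFSClient:
        def __init__(self, master, chunkservers):
            self.master = master
            self.chunkservers = chunkservers          # location name -> ChunkServer stub
            self.cache = {}                           # (file, chunk index) -> (handle, locations)

        def read(self, filename, offset, length):
            chunk_index = offset // CHUNK_SIZE        # fixed chunk size makes this a simple division
            key = (filename, chunk_index)
            if key not in self.cache:                 # contact the master only on a cache miss
                self.cache[key] = self.master.lookup(filename, chunk_index)
            handle, locations = self.cache[key]
            server = self.chunkservers[locations[0]]  # e.g. the closest replica
            return server.read(handle, offset % CHUNK_SIZE, length)

    servers = {name: ChunkServer({"2ef0": b"x" * 128}) for name in ["cs1", "cs2", "cs3"]}
    client = GFSClient(Master(), servers)
    print(client.read("/foo/bar", 0, 16))   # the master is asked once; repeated reads of this chunk hit the cache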

Distributed Wide Area Storage Networks
Distributed Hash Tables: relieving hot spots in the Internet, caching strategies for web servers
Peer-to-Peer Networks: distributed file lookup and download in overlay networks
Most (or the best) of them use DHTs

WWW Load Balancing
Web servers (e.g. www.apple.de, www.uni-freiburg.de, www.google.com) offer web pages
Web clients request web pages
Most of the time these requests are independent
Requests use resources of the web servers: bandwidth and computation time

Load
Some web servers always have a high load, e.g. www.google.com; for permanently high loads the servers simply must be sufficiently powerful
Others suffer from high fluctuations, e.g. during special events:
- jpl.nasa.gov (Mars mission)
- cnn.com (terrorist attack)
Extending the servers for the worst case is not reasonable, yet serving all requests is what we want

Load Balancing in the WWW
Fluctuations target some servers more than others
(Commercial) solution: service providers offer exchange (cache) servers, and many requests are distributed among these servers
But how?

Literature
Leighton, Lewin, et al., STOC '97: "Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot Spots on the World Wide Web"
Used by Akamai (founded 1997) for web caching

Starting Situation
Without load balancing: web clients request web pages directly from the web server
Advantage: simple
Disadvantage: servers must be designed for worst-case situations

Site Caching
The whole web site is copied to different web caches
Browsers send their requests to the web server; the web server redirects the requests to a web cache; the web cache delivers the web pages
Advantage: good load balancing
Disadvantages:
- the redirect is a bottleneck
- large overhead for replicating the complete web site
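A minimal sketch of redirect-based site caching (illustrative only; the class names are invented, and choosing the cache round-robin is an assumption of this sketch, not something the slide specifies): every request first hits the origin web server, which only answers with a redirect to one of the web caches holding a full copy of the site.

    import itertools

    class OriginServer:
        def __init__(self, caches):
            self.caches = itertools.cycle(caches)   # round-robin over the replica caches

        def handle(self, url):
            # the origin never serves content itself; it is the single redirect bottleneck
            return ("redirect", next(self.caches), url)

    class WebCache:
        def __init__(self, site):
            self.site = dict(site)                  # complete copy of the web site

        def handle(self, url):
            return ("page", self.site[url])

    caches = [WebCache({"/index.html": "<html>...</html>"}) for _ in range(3)]
    origin = OriginServer(caches)
    _, cache, url = origin.handle("/index.html")    # step 1: request and redirect
    print(cache.handle(url))                        # step 2: the chosen cache delivers the page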

Proxy Caching
Each web page is distributed to a few web caches
Steps: 1. only the first request is sent to the web server, 2. the web server redirects it to a web cache, 3. links then reference pages in the web cache, 4. the web client keeps surfing inside the web cache
Advantage: no bottleneck
Disadvantages:
- load balancing is only implicit
- high requirements for the placement of pages

Requirements
Balance: fair balancing of the web pages over the caches
Dynamics: efficient insertion and deletion of web-cache servers and files
Views: web clients may see different sets of web caches

Hash Functions
A hash function f maps a set of items I (the web pages) to a set of buckets B (the caches): f : I → B
Set of items: I
Set of buckets: B, with |B| = n
Example: f(i) = (3i + 1) mod n, as used on the following slides

Ranged Hash Functions
Given: items I, caches (buckets) B with |B| = C, and views V ⊆ B
A ranged hash function assigns to every view V and every item i a bucket f_V(i)
Prerequisite: f_V(i) ∈ V for all views V and all items i
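As a small illustration (the type and function names below are invented for this sketch, not part of the lecture), a ranged hash function can be modelled as a callable that takes a view and an item, together with a check of the prerequisite that the result always lies inside the view.

    from typing import Callable, FrozenSet, Iterable

    Bucket = str
    Item = str
    View = FrozenSet[Bucket]
    RangedHash = Callable[[View, Item], Bucket]

    def satisfies_prerequisite(f: RangedHash, views: Iterable[View], items: Iterable[Item]) -> bool:
        items = list(items)
        # f_V(i) must be a bucket of the view V, for every view and every item
        return all(f(view, item) in view for view in views for item in items)

    # trivial (badly balanced) example: always pick the lexicographically smallest cache in the view
    naive: RangedHash = lambda view, item: min(view)
    print(satisfies_prerequisite(naive, [frozenset({"b1", "b2"})], ["p1", "p2"]))   # True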

First Idea: Hash Function
Algorithm: choose a hash function, e.g. f(i) = (3i + 1) mod 4, where n = 4 is the number of cache servers
Balance: very good
Dynamics: inserting or removing a single cache server forces a new hash function, e.g. f(i) = (2i + 2) mod 3, and total rehashing of all pages
Very expensive!
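The cost of total rehashing is easy to see in a few lines of Python (an illustrative experiment using the two hash functions shown on the slide):

    items = [5, 2, 9, 4, 3, 6]                       # example pages from the slide

    before = {i: (3 * i + 1) % 4 for i in items}     # 4 cache servers: f(i) = 3i + 1 mod 4
    after  = {i: (2 * i + 2) % 3 for i in items}     # one server removed: f(i) = 2i + 2 mod 3

    moved = [i for i in items if before[i] != after[i]]
    print(before)   # {5: 0, 2: 3, 9: 0, 4: 1, 3: 2, 6: 3}
    print(after)    # {5: 0, 2: 0, 9: 2, 4: 1, 3: 2, 6: 2}
    print(moved)    # [2, 9, 6]: half of these pages move; with fresh random hash
                    # functions, almost all items are remapped in expectation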

Requirements on Ranged Hash Functions
Monotony: after adding or removing caches (buckets), no pages (items) should be moved between the remaining caches
Balance: all caches should carry the same load
Spread (Verbreitung, Streuung): a page should be distributed to only a bounded number of caches
Load: no cache should have substantially more load than the average

Monotony
After adding or removing caches (buckets), no pages (items) should be moved between the remaining caches
Formally: for all pages i and all views V₁ ⊆ V₂:
if f_{V₂}(i) ∈ V₁, then f_{V₁}(i) = f_{V₂}(i)

Balance
For every view V, the function f_V is balanced:
for a constant c and all buckets b ∈ V:  Pr[f_V(i) = b] ≤ c / |V|

Spread
The spread σ(i) of a page i is the overall number of copies needed for i over all views:
σ(i) = |{ f_V(i) : V is a view }|

Load
The load λ(b) of a cache b is the overall number of copies stored at b over all views:
λ(b) = | ⋃_V f_V⁻¹(b) |, where f_V⁻¹(b) := set of all pages assigned to bucket b in view V
Example (three views, two caches): λ(b₁) = 2, λ(b₂) = 3
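Both definitions translate directly into code. The following helper is an illustrative sketch; the view, page and bucket names are made up and do not correspond to the figure. It computes σ(i) and λ(b) from an explicit table of assignments f_V(i).

    # assignments[view][page] = bucket chosen for that page in that view (values are made up)
    assignments = {
        "V1": {"p1": "b1", "p2": "b2", "p3": "b2"},
        "V2": {"p1": "b1", "p2": "b1", "p3": "b2"},
    }

    def spread(page):
        # number of distinct caches holding a copy of `page` over all views
        return len({view[page] for view in assignments.values() if page in view})

    def load(bucket):
        # number of distinct pages assigned to `bucket` in at least one view
        pages = set()
        for view in assignments.values():
            pages |= {p for p, b in view.items() if b == bucket}
        return len(pages)

    print(spread("p2"))   # 2: p2 is stored on b2 (view V1) and on b1 (view V2)
    print(load("b1"))     # 2: pages p1 and p2
    print(load("b2"))     # 2: pages p2 and p3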

Distributed Hash Tables
Theorem: there exists a family F of ranged hash functions with the following properties, where
- C = number of caches (buckets)
- C/t = minimum number of caches per view
- V/C = constant (#views / #caches)
- I = C (#pages = #caches)
Each function f ∈ F is monotone
Balance: for every view V, Pr[f_V(i) = b] ≤ O(1/|V|)
Spread: for each page i, σ(i) = O(t log C) with high probability
Load: for each cache b, λ(b) = O(t log C) with high probability

The Design
Two hash functions onto the reals [0,1]:
- one maps k·log C copies of each cache b randomly to [0,1]
- one maps each web page i randomly to [0,1]
f_V(i) := the cache in V whose copy minimizes the distance to the point of page i
Caches (buckets) and web pages (items) thus become points on the same interval, and each page is served by the closest cache copy in the current view
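A compact way to see the construction in action is the following sketch (my own illustration under the slide's assumptions: SHA-1 plays the role of the random mapping, k·log C copies per cache, distance measured on the unit circle; none of these constants come from the lecture):

    import hashlib, math

    def point(label):
        # pseudo-random point in [0, 1) derived from a label via SHA-1
        h = int(hashlib.sha1(label.encode()).hexdigest(), 16)
        return (h % 10**8) / 10**8

    class ConsistentHash:
        def __init__(self, caches, k=4):
            self.copies = {}                              # (cache, copy index) -> point in [0, 1)
            c = max(2, len(caches))
            per_cache = max(1, int(k * math.log2(c)))     # k * log C copies of every cache
            for cache in caches:
                for j in range(per_cache):
                    self.copies[(cache, j)] = point(f"{cache}#{j}")

        def assign(self, page, view):
            p = point(page)
            def dist(entry):                              # circular distance on [0, 1)
                d = abs(self.copies[entry] - p)
                return min(d, 1 - d)
            # cache in the view whose copy lies closest to the page's point
            best = min((e for e in self.copies if e[0] in view), key=dist)
            return best[0]

    ch = ConsistentHash(["b1", "b2", "b3", "b4"])
    pages = [f"page{i}" for i in range(10)]
    v1 = {"b1", "b2", "b3", "b4"}
    v2 = {"b1", "b2", "b3"}                               # cache b4 removed from the view
    moved = [p for p in pages if ch.assign(p, v1) != ch.assign(p, v2)]
    print(moved)   # only the pages whose closest copy belonged to b4 move (monotony)

Running it shows monotony directly: removing b4 from the view changes the assignment only for the pages that b4 was serving; all other pages keep their cache.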

Monotony
f_V(i) := the cache in V whose copy minimizes the distance to the point of page i
For all views V₁ ⊆ V₂: if f_{V₂}(i) ∈ V₁, then f_{V₁}(i) = f_{V₂}(i)
Observe: the interval between page i and its closest cache copy in V₂ contains no copy of any V₂ cache, hence in particular no copy of a V₁ cache (the blue interval in the figure is empty in V₂ and therefore also in V₁)

2. Balance
Balance: for all views V, Pr[f_V(i) = b] ≤ O(1/|V|)
Choose a fixed view V and a web page i and apply the two hash functions
Under the assumption that the mapping is random, every cache in the view is chosen with essentially the same probability

3. Spread
σ(i) = number of all necessary copies of page i (over all views)
- C = number of caches (buckets)
- C/t = minimum number of caches per view
- V/C = constant (#views / #caches)
- I = C (#pages = #caches)
- every user knows at least a fraction 1/t of the caches
For every page i: σ(i) = O(t log C) with high probability
Proof sketch: every view has a cache copy in every interval of length t/C (with high probability); the number of cache copies in such an interval around page i gives an upper bound for the spread
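A one-line calculation (my own filling-in; the lecture's exact constants were lost in transcription) shows where the O(t log C) comes from: with C caches and k·log C copies per cache, an interval of length t/C contains in expectation (t/C) · (C · k · log C) = t · k · log C = O(t log C) cache copies, and since every view has a copy in that interval with high probability, the spread of the page is at most this number of copies.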

4. Load
Load: λ(b) = number of copies stored at cache b over all views, where f_V⁻¹(b) := set of pages assigned to bucket b under view V
For every cache b: λ(b) = O(t log C) with high probability
Proof sketch: consider intervals of length t/C; with high probability a cache copy of every view falls into each of these intervals; the number of items in such an interval gives an upper bound for the load

Summary
The Distributed Hash Table is a distributed data structure for virtualization
- it provides a fair balance
- it provides dynamic behavior
It is the standard data structure for dynamic distributed storage
