Massive Data Storage: Storage on the "Cloud" and the Google File System paper by: Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung presentation by: Joshua Michalczak COP 4810 - Topics in Computer Science Dr. Marinescu, Spring 2011, UCF
Outline A history of data storage Defining "Massive" data storage Defining the properties of a good storage system Google's storage - the Google File System (GFS) Cloud storage and you: present and future
A history of storage - 1970s Internal 4K ROM expandable to 12K Used Cassette Tapes for external I/O Capacity based on read/write speed Roughly 200K of storage
A history of storage - 1980s First introduced in 1971 More expensive than cassettes Equivalent storage capacity Not many computers yet offered diskette drives Popularity rose in early 1980s Many competing manufacturers (cheaper) Larger capacities (> 1MB) Most machines offered diskette peripherals (Commodore 64), or used them exclusively (Apple II, Macintosh)
A history of storage - 1990s to present Hard drives first introduced in 1957 Reserved for "macrocomputers" Very expensive; $3,200 / month Cost and drive size limit adoption in households 1973 - IBM - first "sealed" hard drive 1980 - Seagate - first hard drive for microcomputers; 5 MB for $450 1980 - IBM - first 1GB hard drive; the size of a refrigerator, for $40,000 Drops in size (3-1/2 inch, 1988), cost, and the introduction of interface standards (SCSI, 1986; IDE, 1986) led to larger household adoption.
Defining "Massive" data storage Things to consider: File type (text documents, pictures, movies, programs, etc) Cloud capacity vs. local capacity Transfer rate (internet speed) A typical individual is unlikely to generate a "massive" amount of data At $100 / 1TB, why not just buy another drive? Consider instead services with large clientele Internet index & search: Google, Bing, Yahoo, etc Data storage & sharing: Flickr, YouTube, Google Docs, Facebook, DropBox, etc Google's storage needs: 2006: Reported that the crawler alone uses 800TB Most (~97%) of files < 1GB
What makes a good storage system? Let's ask Jim Gray received the Turing Award in 1998 "for seminal contributions to database and transaction processing research and technical leadership in system implementation" Defined the "ACID" (atomicity, consistency, isolation, and durability) properties that guarantee the reliability of database transactions although originally designed for databases, these terms can apply to all forms of data storage, including our "on-the-cloud" model
Atomicity "all or nothing" If any portion of a transaction fails, the entire transaction fails Failed transactions should leave the data unchanged; only complete transactions change the system's state The reason for failure should not matter (Hardware failure, system failure, connection loss, etc) Atomic transactions can't be subdivided Why is it important? Prevents data errors from transaction failure ("roll-back")
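The "all or nothing" idea can be sketched in a few lines of Python. This is an illustrative toy, not database or GFS code; the function name and dictionary-shaped state are hypothetical. Staging every change on a private copy makes roll-back free: a failure anywhere simply discards the copy.

```python
# Minimal sketch of atomic "all or nothing" semantics (hypothetical example).
import copy

def run_atomic(state, operations):
    """Apply every operation or none of them; return the resulting state."""
    staged = copy.deepcopy(state)     # work on a private copy of the state
    try:
        for op in operations:
            op(staged)                # any operation may raise on failure
    except Exception:
        return state                  # "roll-back": the original is untouched
    return staged                     # commit: every operation succeeded
```

Note that the reason for failure genuinely does not matter here: any exception, whatever its cause, leaves the caller holding the unmodified original state.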
Consistency "database integrity" The database remains "truthful" to its intent Transactions move the database from one consistent state to another Only "valid" data is written; "invalid" data is handled as per the implementation requirements validity is determined by a set of data constraints Why is it important? When we access data, the data present matches our expectations
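"Validity is determined by a set of data constraints" can be made concrete with a small sketch. This is a hypothetical helper, not real database code: a write is accepted only if every constraint holds on the state it would produce.

```python
# Minimal sketch of constraint-checked writes (hypothetical example).
def commit_if_valid(db, key, value, constraints):
    """Apply the write only if every constraint holds on the resulting state."""
    candidate = {**db, key: value}            # the state the write would produce
    if all(check(candidate) for check in constraints):
        db[key] = value                       # valid: move to the new state
        return True
    return False                              # invalid: reject; state unchanged
```

With a constraint like "balances are never negative", the database can only ever move between consistent states, which is exactly the guarantee consistency asks for.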
Isolation "incomplete modifications are invisible" Other operations cannot access data being modified by a transaction which has not yet completed Concurrent transactions should be unaware of each other (beyond possibly having to wait for access) Does this mean we can't have concurrency? No. It just means that if we do have concurrency, we must take extra precautions to prevent data contamination It may make implementing concurrency "harder" (the naive solution will likely not work) Why bother? Prevents the database from going into inconsistent states because of transaction interleaving
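One simple precaution against interleaving is a lock held for the whole transaction. The sketch below is a toy (the Account class is hypothetical): a transfer briefly passes through an inconsistent state where money is "in flight", but readers hold the same lock, so they can only ever observe the state before or after the transaction, never the middle.

```python
# Minimal sketch of lock-based isolation (hypothetical example).
import threading

class Account:
    def __init__(self):
        self._lock = threading.Lock()
        self.checking, self.savings = 100, 0

    def transfer(self, amount):
        with self._lock:             # writer holds the lock for the whole transaction
            self.checking -= amount  # intermediate state: money "in flight"
            self.savings += amount

    def total(self):
        with self._lock:             # readers wait rather than see a partial update
            return self.checking + self.savings
```

This is also why concurrency gets "harder": the naive version without the lock can interleave the two updates of `transfer` with a read and report a wrong total.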
Durability "ability to recover from failure" When a transaction reports back as complete and successful, any modifications it made should be impervious to system failure This means that the system should never have to roll-back completed transactions partially completed transactions at the time of failure won't affect the system (atomic) the state of the database should be valid (consistent) Why is it important? All systems eventually fail It is best to consider failure at design time rather than as an afterthought.
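A common way to get durability is a write-ahead log: a transaction is only reported successful after its record has been flushed to stable storage, so a crash after the acknowledgement can always be replayed. The class below is an illustrative sketch, not GFS code; the names are hypothetical.

```python
# Minimal sketch of durability via a write-ahead log (hypothetical example).
import os

class DurableStore:
    def __init__(self, log_path):
        self.log_path = log_path

    def commit(self, record: str):
        with open(self.log_path, "a") as log:
            log.write(record + "\n")
            log.flush()
            os.fsync(log.fileno())   # record is on disk before we report success

    def recover(self):
        """Replay the log after a crash or restart."""
        with open(self.log_path) as log:
            return [line.rstrip("\n") for line in log]
```

The `fsync` call is the crucial step: without it the operating system may still be buffering the write in memory when the machine fails, which would mean rolling back a transaction we already called successful.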
"ACIDS" - Scalability "Capacity growth" The data capacity of the system should be expandable adding additional storage should be possible, if not easy Data access / commits run at an acceptable rate, even during high system usage "acceptable" will be application specific How dare you?! I know! I'm sorry :( Why would you suggest such a thing? To comply with our "massive" requirements: large amounts of data being sent and received by many users simultaneously.
GFS - System Requirements Uses commodity hardware, which often fails Millions of files, typically > 100MB, some > 1GB Workload consists mostly of reads (large, sequential reads and small, random reads) Workload may also contain large, sequential writes to append data; files seldom modified after written High concurrency required; often 100+ clients appending simultaneously; appends must be atomic with minimal synchronization overhead High sustained bandwidth valued over low latency; most applications will be processing data in bulk in a non-time-sensitive manner.
GFS - Files
GFS - System Overview
GFS - Atomicity & Concurrency GFS implements a special file operation: record append Client sends an append command, specifying only the data to be written (no offset as is typical) The file system executes these commands atomically, which prevents fragmentation from concurrent interleaving The file offset is returned to the client once the data is committed (for the client's future reference)
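The key twist of record append is that the server, not the client, chooses the offset. The sketch below is a toy illustration of that idea (a single in-memory "chunk", not GFS's actual replicated implementation): serializing appends at the server means concurrent records never interleave, and each client learns afterwards where its record landed.

```python
# Minimal sketch of record-append semantics (hypothetical example).
import threading

class ChunkFile:
    def __init__(self):
        self._lock = threading.Lock()
        self._data = bytearray()

    def record_append(self, record: bytes) -> int:
        """Append atomically; return the offset where the record was placed."""
        with self._lock:              # one append commits at a time
            offset = len(self._data)  # the file system picks the offset
            self._data.extend(record)
            return offset
```

Contrast this with a traditional write-at-offset interface, where two clients that both computed "append at byte 4096" would overwrite each other and need external synchronization to avoid it.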
GFS - Consistency & Durability GFS implements several fault tolerance measures, including: Data redundancy; minimum of 3 copies per chunk Machine redundancy; master state replicated on multiple machines, with "shadow" masters available should the primary fail Chunk checksums to detect data corruption; notifies master; new clean copy received from a replica Fast recovery Master reboot: file hierarchy persistent; chunk location metadata rebuilt by probing network Chunkserver reboot: "thin", master controls metadata; once probed by master, available to network Chunkservers which report frequent errors (data errors, network errors, etc) are reported to humans for diagnosis Master servers are more heavily monitored (key point of failure)
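The checksum-and-repair loop can be sketched in a few lines. This is a simplified illustration, not GFS code (GFS actually keeps checksums for 64 KB blocks within each chunk, and the repair involves the master and the network): store a checksum alongside each chunk, and on a mismatch at read time, replace the corrupt copy with a clean one from a replica.

```python
# Minimal sketch of checksum-based corruption detection (hypothetical example).
import zlib

def store(chunk: bytes):
    """Keep a CRC alongside the data so corruption is detectable later."""
    return {"data": chunk, "crc": zlib.crc32(chunk)}

def read(stored, replica):
    """Return chunk data, repairing from a clean replica if corruption is found."""
    if zlib.crc32(stored["data"]) != stored["crc"]:
        stored["data"] = replica["data"]   # fetch a clean copy from a replica
        stored["crc"] = replica["crc"]
    return stored["data"]
```

The design point is that corruption is caught on the read path, before bad data ever reaches an application, which is what lets GFS tolerate cheap disks that silently flip bits.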
GFS - Relaxing Isolation requirements GFS improves concurrency performance by ignoring some of the restrictions of isolation random overwrites don't interrupt reading creates situations where reads might be reading the data of incomplete write transactions; should the transaction fail, the data read is "bad" Does this break our database? Short answer: no, if you take the problems into account Applications are made aware of the fact that they might be reading "bad" data which is incomplete; they can repeat the operation if needed For Google, the point is moot random overwrites are small and will fail early, if they fail at all (which is unlikely) most applications only append data; random writes rare
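One way applications cope with possibly-"bad" data is to make records self-validating, so a reader can detect the leftovers of an incomplete write on its own. The framing format below is a hypothetical illustration (the GFS paper describes applications using per-record checksums for this purpose, but not this exact layout): each record carries its own length and CRC, so a truncated or garbled record is simply recognized and skipped or retried.

```python
# Minimal sketch of self-validating records (hypothetical framing format).
import struct
import zlib

def frame(payload: bytes) -> bytes:
    """Prefix a record with its length and CRC so readers can self-validate."""
    return struct.pack(">II", len(payload), zlib.crc32(payload)) + payload

def read_record(buf: bytes):
    """Return (payload, rest), or (None, b'') if the record is incomplete or bad."""
    if len(buf) < 8:
        return None, b""
    length, crc = struct.unpack(">II", buf[:8])
    payload = buf[8:8 + length]
    if len(payload) < length or zlib.crc32(payload) != crc:
        return None, b""             # incomplete write: caller skips or retries
    return payload, buf[8 + length:]
```

This pushes part of the isolation burden from the file system into the application, which is exactly the trade GFS makes in exchange for higher concurrency.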
GFS - Scalability At first glance, master nodes appear to bottleneck the system All transactions must first be routed through the master for approval and location However, clients perform the raw data transfers "Heavy lifting" is distributed New chunkservers are easy to add Install Linux; install the chunkserver application; attach to the network Once probed by master, available for use Signals for control take place on a separate network from signals for data Transferring data will not stall the control of the system
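The control/data split can be sketched as follows. The classes and placement table are hypothetical stand-ins, not the paper's implementation: the master only answers the small "where is my chunk?" question, while the bulk transfer goes straight to a chunkserver, keeping the master off the data path.

```python
# Minimal sketch of the control/data split (hypothetical example).
class Master:
    def __init__(self, placements):
        self.placements = placements          # chunk id -> list of chunkservers

    def locate(self, chunk_id):
        return self.placements[chunk_id]      # tiny control message only

class Chunkserver:
    def __init__(self, chunks):
        self.chunks = chunks

    def read(self, chunk_id):
        return self.chunks[chunk_id]          # the "heavy lifting" happens here

def client_read(master, servers, chunk_id):
    location = master.locate(chunk_id)[0]     # ask the master where the chunk is...
    return servers[location].read(chunk_id)   # ...then transfer data directly
```

Because the master handles only small metadata lookups, adding clients mostly adds load to the (many, cheap, easily added) chunkservers rather than to the single control point.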
GFS - other topics in the paper "snapshot" - a fast file copy (using direct chunkserver-to-chunkserver communication) discussion of the choice for 64MB chunk size detailed description of metadata memory structures master node operation log "leasing" and "mutation" of chunks (how chunks propagate to the replica chunkservers) detailed discussion of network architecture locking implementation for concurrency when chunks should be replicated, where to put the replicas, and balancing resource use with replica placement garbage collection of "dead" chunks detection of "stale" (out-of-date) replica chunks benchmarks (obviously the system works and is fast)
Cloud Storage and YOU! Chances are, you are already using several "cloud" based systems, including storage Google Docs, YouTube, Flickr, Facebook, Dropbox, online email, accounting, photo editing, etc Cloud storage offers many benefits to end users the service provider has the burden of reliability and they're probably doing a better job of it than you would... how many of you back up and replicate your data 3 times? if your hardware fails, who cares? you can just "download it again" when you fix the problem your data is accessible* everywhere; work on it at home on your desktop, present it in class on your laptop
Future problems for cloud storage to consider Data ownership - if I make a document in Google Docs, hosted on the Google Doc servers, who owns it? Availability - if a service provider goes down, the internet goes down, or the connection is otherwise unreliable, how will I get my data? Security - service providers host a large, central database of information which makes for a hot target (one penetration gives access to tons of information) Permissions - how do I guarantee who has access to my data when I don't directly control it? Diffusion - my data can be spread out over several providers; how will I know where to look for the data I want?
Bibliography 1. Radio Shack Catalogs, http://www.radioshackcatalogs.com/catalogs_extra/1977_rsc-01/ 2. TRS-80, http://en.wikipedia.org/wiki/TRS-80 3. Floppy Disk History, http://en.wikipedia.org/wiki/Floppy_disk 4. Hard Drive History (1), http://www.duxcw.com/digest/guides/hd/hd2.htm 5. Hard Drive History (2), http://en.wikipedia.org/wiki/History_of_hard_disk_drives 6. Average internet speed, http://www.speedmatters.org/content/internet-speed-report 7. Google's storage needs, http://labs.google.com/papers/bigtable.html 8. ACID qualities, http://en.wikipedia.org/wiki/ACID 9. The Google File System (GFS), http://labs.google.com/papers/gfs.html