Backup Implementation Proposal
Document Revision 5//6 (wcw)
© 2006 Carnegie Mellon University. All Rights Reserved
Ideal Multi-tier Backup Architecture

[Diagram: Tier 1 backup server for client machines (client backups; client restores), Tier 2 backup server for server machines (server backups; restore of the client backup server), Tier 3 off-site (restore from off-site of the server backup).]

- Backups and restores are isolated per tier. For example, a client restore only goes to the backup server for client machines; it never crosses to the backup server for server machines.
- The best backup software is used at each tier (rather than relying on a package that does everything poorly).
- This optimizes for the common operations but makes the less common ones longer: if everything blew up, you'd start by restoring from off-site, then you'd restore the client backup servers before you could start restoring the clients.
- Tier 1: Desktop/client backup. Optimized for user self-service restores; assumes variable-bandwidth networking; assumes mostly file restores.
- Tier 2: Server restores. Optimized for entire-disk restores (generally a restore will only be done if a RAID set fails); can assume fast networking.
- Tier 3: Off-site. Encryption likely required; much higher latency to access the data.
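The per-tier isolation rule can be sketched in a few lines. The server names and the `restore_target` helper here are hypothetical, purely to illustrate that a restore request never crosses tiers; the real products enforce this by deployment, not by code:

```python
# Hypothetical mapping of machine class to its tier's backup server.
# Restores are isolated per tier: a client restore never reaches the
# server-machine backup server, and vice versa.
BACKUP_TIER = {
    "client": "tier1-client-backup-server",
    "server": "tier2-server-backup-server",
}

def restore_target(machine_class: str) -> str:
    """Return the only backup server allowed to serve this machine's restore."""
    if machine_class not in BACKUP_TIER:
        raise ValueError(f"unknown machine class: {machine_class}")
    return BACKUP_TIER[machine_class]
```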
Software Tier 1: LiveBackup

- Continuous Data Protection: as changes are made, backups are saved to the cache and then sent to the server.
- Network friendly:
  - Won't send duplicate data to the server.
  - Data is compressed and encrypted before being sent.
  - Only sends changes incrementally, so there isn't a big network burst when backups happen.
  - Uses HTTPS as a transport, so backups can happen anywhere there is network connectivity to the server.
- Bare-metal restore: a boot CD/DVD can be made to restore a system from a blank hard drive.
- Users restore their own files using a native Windows interface.
- Unfortunately:
  - Currently available under Windows only (though a Mac port is rumored).
  - Server doesn't scale horizontally.
  - Reports of slowness on client machines.
  - Software assumes strong central IT control and doesn't empower users.
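LiveBackup's actual wire protocol isn't documented here, but "won't send duplicate data" behavior is typically built on content hashing: the client hashes chunks of changed files and only uploads chunks the server has never seen. A minimal sketch under that assumption (hypothetical function names; fixed-size chunks for simplicity, where real CDP products are more sophisticated):

```python
import hashlib

def chunk_hashes(data: bytes, chunk_size: int = 4096):
    """Split data into fixed-size chunks and hash each one."""
    return [hashlib.sha256(data[i:i + chunk_size]).hexdigest()
            for i in range(0, len(data), chunk_size)]

def chunks_to_send(data: bytes, server_has: set, chunk_size: int = 4096):
    """Return only the (hash, chunk) pairs the server doesn't already hold.

    Duplicate data and unchanged regions cost no network traffic, which is
    why incremental backups avoid a big network burst.
    """
    out = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in server_has:
            out.append((digest, chunk))
    return out
```

In a real deployment the upload itself would ride over HTTPS, matching the transport described above.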
Software Tier 1: Stage

- Stage is the existing system used to back up AFS.
- This allows us to keep running Stage without having to either pick a package that supports AFS or write custom code to merge it in with another package.
Software Tiers 1/2/3: TiNA (so much for the ideal model)

- Crosses multiple tiers as follows:
  - Tier 1: Mac client backups.
  - Tier 2: Backs up Storactive/Stage (AFS); replaces Amanda; database plug-ins.
  - Tier 3: Manages off-site data.
- Features include:
  - Synthetic Full: after the first backup, only the changes need to be sent over the network; less storage is required since you don't need to keep multiple full dumps around.
  - Tape encryption.
  - Good admin interface for doing restores.
- Unfortunately:
  - The current version (v4.0) doesn't support mutual authentication, so we can't deploy until the beta (v4.1) with mutual auth is released.
  - Doesn't scale well horizontally yet (though rumor has it things are getting better on this front in 4.1).
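To illustrate the synthetic-full idea: rather than re-reading everything from the client, the server merges the last full backup with the incremental change sets it already holds into a new "full" image. A sketch with hypothetical names, treating a backup as a simple path-to-contents map (not TiNA's actual on-disk format):

```python
def synthetic_full(base: dict, incrementals: list) -> dict:
    """Merge a base full backup with successive incremental change sets.

    Each incremental maps path -> new contents, or path -> None for a
    deletion. Only the changes ever crossed the network; the merge is a
    server-side operation, so no extra full dumps need to be kept around.
    """
    image = dict(base)
    for inc in incrementals:
        for path, contents in inc.items():
            if contents is None:
                image.pop(path, None)  # file was deleted since the base full
            else:
                image[path] = contents  # file was added or modified
    return image
```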
Implementation: LiveBackup

- Single Dell 85 with local disk.
- Initial pilot of machines; the plan is to back up all Windows desktops, with hardware growing to handle at least 5 machines.
- Current desktops/laptops being backed up via ARCserve will be migrated to LiveBackup.
- Single Gigabit Ethernet network connection to the campus network; expected to be on VLAN14.
Implementation: TiNA

- Single Dell 85 with locally attached tape drives as the primary TiNA server.
- Disk backup is added via an NFS mount of RAID from the storage nodes (we don't want to put too many SCSI devices on a single box).
- Off-site backup is done via tape to the tape library (for now).
- The network interface to campus/clients is a single Gigabit Ethernet connection on VLAN1.
- The network interface to storage is a single Gigabit Ethernet connection to the high-speed A1 internal network VLAN. This VLAN needs to be created by NG.
- Each storage node will have one service connection (VLAN1) and one connection to the high-speed A1 internal network VLAN. NFS traffic will not go over the service connection.
Rack Layout

[Diagram: rack elevation showing the Tina backup server with tape drives attached to the server's PCI SCSI card, storage node arrays attached to the PCI SCSI card, and space reserved for two additional RAID arrays per storage node.]

- The exact ordering shown in the diagram is not required; machines may share the same rack.
- The diagram is not to scale.
- The main thing that matters is that 6U of space for two RAID arrays is reserved for each storage node being installed.
Hardware Summary

Service    | Machine                 | RU | Gigabit ports | Notes
-----------|-------------------------|----|---------------|--------------------------------------
LiveBackup | Server (Dell 85)        |    | 1             |
LiveBackup |                         |    |               | SCSI connection to LiveBackup Server
Tina       | Server (Dell 85)        |    |               |
Tina       | Tape drive (Dell 1T)    | 4  |               | SCSI connection to Tina Server
Tina       | Tape drive (Dell 1T)    | 4  |               | SCSI connection to Tina Server
Tina       | Storage node (Dell 85)  |    |               | Panel for future RAID unit; panel for future RAID unit
Tina       | Storage node (Dell 85)  |    |               | Panel for future RAID unit; panel for future RAID unit
Tina       | Storage node (Dell 85)  |    |               | Panel for future RAID unit; panel for future RAID unit
Totals     |                         | 57 | 9             |