Flexible, Scalable, Hardware-Independent Solutions for Long-Term Archiving
More than 20 Years of Experience in Archival Storage
Milestones from 1992 to 2010:
> Mainframe tape libraries
> Open-system tape libraries
> Archive systems and FMA software
> HP OEM of FSE & FMA
> ARCHIVEMANAGER, FILELOCK, OPENARCHIVE, FSA, HPA
Positioning of GRAU DATA
> Concentration on software solutions for data archiving: flexible, scalable, hardware-independent
> Compliant archiving according to local legislation
> ARCHIVEMANAGER: tiered HSM & archiving software (Windows & Linux), scalable to petabytes; OPENARCHIVE as its Linux-based open-source derivative
> FILELOCK: compliant archiving on standard disk storage; target market 0.5 TB to 20 TB
> FileServerArchiver (FSA): data migration for Windows file servers
ARCHIVEMANAGER & OPENARCHIVE
> In development since 2000
> Scalable from 1 TB to petabytes and beyond
> Common code base for Linux and Windows
> File-system interface (NFS, CIFS); API for additional functionality available as an option
> New software architecture with version 4 in 2012
> OPENARCHIVE = Linux-based open-source version
How It Works
> Applications such as DMS, email/file archiving, scientific data, PACS, video, prepress, and CAD/CAM write to the archive server (ARCHIVEMANAGER) via NFS/CIFS over TCP/IP, including from remote sites
> The archive server stores the data via the SAN on disk archives (disk media) and tape archives (tape media), locally or remotely
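Because the archive is exposed as an ordinary NFS/CIFS share, a client archives data with a plain file copy and needs no product-specific API. The sketch below is a minimal illustration under that assumption; the mount point, function name, and subdirectory are hypothetical, not part of ARCHIVEMANAGER.

```python
import shutil
from pathlib import Path

def archive_file(source: str, archive_mount: Path, subdir: str = "projects") -> Path:
    """Copy a file into the mounted archive share (hypothetical mount point).

    From the client's point of view this is just a normal file copy; the
    archive server later migrates the copy to its disk/tape tiers.
    """
    dest_dir = Path(archive_mount) / subdir
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / Path(source).name
    shutil.copy2(source, dest)  # copies contents and preserves timestamps
    return dest
```

In production the mount point would be an NFS or CIFS export of the archive server; the application never needs to know which tier the data eventually lands on.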
GAM: Integration into Standard File Systems
> An event filter in the operating system intercepts file-system activity between the application and the file system
> The GAM client manages the file-system activities and feeds meta-data management (MDM)
> The GAM server manages the storage tiers
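GAM's real event filter sits inside the operating system and sees every file-system call. As a rough user-space approximation of the same idea, a polling pass can flag files that have stopped changing and are therefore ready to hand to the tier manager. All names below are illustrative, not GAM's.

```python
import time
from pathlib import Path

def scan_for_candidates(watched: Path, min_idle_s: float, seen: dict) -> list:
    """One polling pass over a watched directory tree.

    Returns files whose modification time is unchanged since the previous
    pass and that have been idle for at least min_idle_s seconds; a real
    HSM would also track which files were already migrated.
    """
    candidates = []
    now = time.time()
    for entry in watched.rglob("*"):
        if not entry.is_file():
            continue
        mtime = entry.stat().st_mtime
        prev = seen.get(entry)       # mtime recorded on the previous pass
        seen[entry] = mtime
        if prev == mtime and now - mtime >= min_idle_s:
            candidates.append(entry)  # stable: ready for the tier manager
    return candidates
```

Polling is only a sketch; an in-kernel filter avoids both the scan cost and the window between passes.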
Client-Server Architecture
Advantages of different configurations:
> Keep it simple: all in one box (GAM client and server on one Windows/Linux machine)
> Several clients in front of one GAM server for high throughput
> Windows and Linux clients for seamless integration into the CIFS or NFS world
In every configuration the GAM server manages the disk and tape tiers.
Need for a High-Performance Archive
Standard file systems have limitations, and GAM inherits them. The major limits are:
> the number of files within one file system
> the throughput of a single file system
> the throughput for huge files (10 TB and more)
Integrating GAM into parallel file systems breaks these limits.
GAM: Integration with Parallel File Systems
Analysing the market, we found two parallel file systems suitable for integration with the GRAU ArchiveManager:
> Lustre
> FhGFS
Basic Architecture of These Parallel File Systems
> Many clients access several storage nodes in parallel over the data path
> Dedicated metadata servers handle the namespace separately from the data
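The defining idea in both Lustre and FhGFS is that file data is striped in chunks across the storage nodes while the layout (which chunk lives where) is kept as separate metadata. A toy illustration of that split, with plain directories standing in for storage nodes and a tiny chunk size for readability:

```python
from pathlib import Path

CHUNK_SIZE = 4  # bytes per stripe chunk; real systems use e.g. 1 MiB


def stripe_write(data: bytes, node_dirs: list, name: str) -> list:
    """Split data into chunks and spread them round-robin across node
    directories; the returned chunk list plays the role of the metadata."""
    layout = []
    for i in range(0, len(data), CHUNK_SIZE):
        node = node_dirs[(i // CHUNK_SIZE) % len(node_dirs)]
        chunk_path = Path(node) / f"{name}.chunk{i // CHUNK_SIZE}"
        chunk_path.write_bytes(data[i:i + CHUNK_SIZE])
        layout.append(chunk_path)
    return layout


def stripe_read(layout: list) -> bytes:
    """Reassemble the file by reading its chunks back in layout order."""
    return b"".join(p.read_bytes() for p in layout)
```

Because each chunk sits on a different node, reads and writes of one large file can proceed on all nodes at once, which is exactly the property the high-performance archive exploits.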
Integration of GAM in a Parallel FS
> A GAM instance runs on each storage node, so archiving capacity scales with the number of nodes
> GAM's meta-data management (MDM) runs alongside the parallel file system's metadata servers
High-Performance Archive Based on Tape
> Applications write through storage nodes 1 to n; each node migrates its data to tape
High-Performance Archive Based on Disk
> Applications write through storage nodes 1 to n; each node migrates its data to any type of disk
High-Performance Archive with Disk and Tape
> Applications write through storage nodes 1 to n; each node migrates its data to both disk and tape
High-Performance Archive
A node based on standard hardware is expected to sustain a data rate above 200 MB/s. The performance targets for a system with 20 storage nodes are therefore:
> HSM throughput per hour = 12 TB
> HSM throughput per day = 250 TB
A system with 100 nodes should be able to move as much as 1 PB per day.
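These targets can be sanity-checked against the raw numbers: 20 nodes at 200 MB/s aggregate to roughly 14.4 TB per hour and 345 TB per day, so the quoted 12 TB/h and 250 TB/day leave headroom for protocol and tape-handling overhead. The check below assumes decimal units (1 TB = 10^6 MB).

```python
MB_PER_SEC_PER_NODE = 200  # sustained rate per storage node, from the slide


def raw_throughput_tb(nodes: int, seconds: int) -> float:
    """Aggregate raw throughput in TB (1 TB = 1e6 MB) for a node count
    and a time span in seconds."""
    return nodes * MB_PER_SEC_PER_NODE * seconds / 1e6


per_hour_20 = raw_throughput_tb(20, 3600)    # 14.4 TB/h raw ceiling
per_day_20 = raw_throughput_tb(20, 86400)    # 345.6 TB/day raw ceiling
per_day_100 = raw_throughput_tb(100, 86400)  # 1728 TB/day, well above 1 PB
```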
Status of Integration with FhGFS
> Cooperation with Fraunhofer started in early 2012
> A prototype installation of GAM plus FhGFS is running at the High Performance Computing Center (HLRS) of the University of Stuttgart
> A tighter integration of GAM into FhGFS is planned for Q3/2012
> A release of GAM in combination with FhGFS is planned for Q4/2012
The Final Destination of Your Data www.graudata.com