Linux Storage Management
Petros Koutoupis
For years now I have been working in the data storage industry, offering various services that include development and consultation. With regards to consultation, the services extend to storage customization, configuration and performance tuning. During these engagements I have noticed that, even while working with extremely intelligent individuals, education is still somewhat lacking in how the operating system manages storage volumes and how to tune every aspect of that stack for the ideal computing environment. This in turn can lead to disastrous results. The purpose of this article is to cover the basics of all the layers involved in Linux 2.6 storage management, along with key concepts to be aware of when making decisions and tuning for performance. The idea is to give readers better insight into analyzing the environment they are configuring the storage for, and to help them understand the I/O profile it is supposed to cater to. The article is partitioned into the following main topics: (1) an outline of the I/O and SCSI subsystems, (2) a summary of how storage is presented to the host, and (3) real-world methods of configuring and tuning your environment.

What is I/O and How Do I Determine my I/O Profile?

In order to fully understand the following material, it is necessary to cover the basics of I/O and I/O management. In general, the concept of I/O (or Input/Output) is the ability to perform an input and/or output operation between a computer and a device. That device could be a keyboard or mouse functioning as an input device. It could also be an output device such as a monitor updating the coordinates of the mouse cursor based on user input. With regards to data storage, when I/O is spoken of, it usually signifies the input and output streams of data to a disk device (a.k.a. a block device). A block device is a device (i.e., a hard disk drive, CD-ROM, floppy, etc.) that can host a file system and/or store non-volatile data. In most modern computing systems, a block device can only handle I/O operations that transfer one or more whole blocks (or sectors), which are usually 512 bytes (or a larger power of two) in length. A block device can be accessed through a file system node, mounted locally or networked and shared, or through the physical device interface.
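Below is a quick, hedged illustration of how such a block device shows up on a Linux host; the device name /dev/sda is an assumption, so substitute whatever disks your system actually presents.

# The block device node, with its major and minor numbers
ls -l /dev/sda
# Block devices and partitions the kernel currently knows about
cat /proc/partitions
# The device's sector size in bytes (commonly 512)
blockdev --getss /dev/sda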
The advantage of utilizing a file system is that it adds organization to the stored data so that it can be read by both the user and the OS (with a corresponding application). File systems have become so advanced that numerous features are written into the modules, enabling the user to manage a redundant and/or high-performing environment. Without the file system, the physical device would be an unintelligible sequence of byte values; there would not be any metadata to identify the location of files, including file-specific information such as file size, permissions, etc.

Moving back to I/O: there are numerous ways to initiate an I/O process to a storage device. To keep this as simple as possible, we will not go into great depth on those details. Just note that the basic steps an application usually uses to generate I/O between the application layer of the Operating System and the end storage device are:

1. Open the device or file.
2. Set the location from which to read or write.
3. Execute the read or write operation.
4. Repeat steps 2 and 3 as needed.
5. Close the device or file.

Although these steps may seem a little simplistic, note that many variables are used to define how an I/O operation is performed. These variables are sometimes referred to as the I/O profile. Some of the parameters that define a configuration's I/O profile are:

Transfer size: the number of bytes or blocks transferred.
Seeking method: sequential, random or mixed.
Range: the area on which to execute the I/O and its size. A couple of examples are the first 100 blocks of the disk device, or creating a file with a length of 1 GB.
Processes: the number of processes generating I/O operations that are running concurrently to a disk device. This includes the total number of hosts as well as the number of I/O processes running to the end storage device.
Data pattern: the data pattern filled into the write/read buffers and sent to/from the disk device.
Timing: the rate at which the I/Os are generated, including the timing difference between read and write operations.

While I have listed the most general parameters in the makeup of an I/O profile, note that the profile also includes the host's Host Bus Adapter (hereafter, HBA) type and configuration, the throttling levels of the SCSI layer's queue depth, the SCSI Disk Timeout Value and, if the disk device is in an array, the stripe/chunk size of the disk device and even its RAID type, and more. Understanding the makeup of the I/O profile is extremely important for development, design and problem isolation.
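As a rough sketch of how these parameters translate into an actual workload, the dd commands below exercise a specific transfer size, range and caching behavior; the device name, offsets and sizes are assumptions chosen only for illustration.

# Sequential read: 1 MB transfers over a 1 GB range starting 100 MB into the device
# (a read to /dev/null is non-destructive)
dd if=/dev/sdb of=/dev/null bs=1M skip=100 count=1024
# The same range read again, but with O_DIRECT so the page cache is bypassed
dd if=/dev/sdb of=/dev/null bs=1M skip=100 count=1024 iflag=direct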
When a process is initiated in User Space and it needs to read or write from a device, it will need to enter Kernel Space in order to continue. The finer details of the kernel, including the different kernel types, are not going to be discussed here; that topic is beyond the scope of this article. For further information on the internals of a kernel it is advised to pick up a book on kernel internals or on a specific Operating System [1]. What follows is a basic overview of what the average kernel is responsible for. The majority of the roles taken on by the average kernel are:

Process/Task Management: the main task of a kernel is to allow the execution of applications. Note that I am using the terms process and application interchangeably to signify one and the same thing when in execution, because a process is an application in execution. This also includes the servicing of interrupt request (IRQ) routines. The kernel needs to perform context switching between hardware and software contexts.

Memory Management: the kernel has complete access to the system's memory and must allow processes to safely access this memory as they need it.

Device Management: when processes need to access peripherals connected to the computer, which are controlled by the kernel through device drivers, the kernel allows such access.

System Calls: in order to accomplish tasks such as file I/O, the process is required to have access to memory, and that same process must be able to access various services that are provided by the kernel. The process will make a call for a function which in turn will invoke the related kernel function.

The kernel internally refers to these processes as tasks. Each process or task is issued an ID (a Process ID, or PID) unique to that running process. A process should not be confused with a thread; a kernel thread means something else, so when you are running 12 instances of I/O, you are running 12 processes of I/O and not 12 threads. Threads of execution are the objects of activity within the process (i.e., the instructions that a processor has to execute). A PID is an identifier for the execution of a binary, the living result of running program code. When the binary completes its execution, the process of that execution is wiped clean from memory.
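One way to watch a process cross from User Space into Kernel Space, as described under System Calls above, is to trace its system calls. The sketch below assumes strace is installed and that /etc/hostname exists; on some kernels the open() call may appear under a different name such as openat().

# Show the open/read/write/close system calls a trivial process issues
strace -e trace=open,read,write,close cat /etc/hostname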
Here is an example from a system running Linux (on Windows you would invoke tasklist):

# ps -ef | grep bash
root  ...  :43 pts/0  00:00:00 bash

This process holds a unique PID, and that ID will be cleared once the process either completes or is aborted (either by the user or by an error). If you want to find out more information about an actively running process, both Linux and UNIX make this information, such as memory mappings, file descriptor usage and more, available through the procfs virtual file system mounted from the root path at /proc. In it, all actively running processes are organized according to their respective PID numbers.

Memory Management

Virtual Memory Addressing (VMA) is a memory management technique commonly utilized in multitasking Operating Systems, wherein non-contiguous memory is presented to a User Space process (i.e., a software application) as contiguous memory, referred to as the virtual address space. VMA is typically utilized in paged memory systems. Paging memory allocation algorithms divide a specific region of computer memory into small partitions and allocate memory using a page as the smallest building block. The memory access part of paging is done at the hardware level via page tables and is handled by the Memory Management Unit (MMU).

As mentioned earlier, physical memory is divided into small blocks called pages (typically 4 KB in size on 32-bit architectures and 8 KB on 64-bit architectures), and each block is assigned a page number. The operating system may keep a list of free pages in its memory, or may choose to probe the memory each time a memory request is made (though most modern operating systems do the former). Whatever the case, when a program makes a request for memory, the operating system allocates a number of pages to the program and keeps a list of the allocated pages for that particular program in memory. When paging is used alongside virtual memory, the operating system has to keep track of pages in use and pages which will not be used or have not been used for some time. Then, when the operating system deems fit, or when a program requests a page that has been swapped out, the operating system swaps a page out to disk and brings another page into memory. In this way, you can use more memory than your computer physically has.

A paging file should NOT be confused with a swap file; the swap file and paging file are two different entities. Although both are used to create virtual memory, there are subtle differences between the two. The main difference lies in their names. Swap files operate by swapping a process's memory regions from system memory into the swap file on the physical disk. Your Operating System usually allocates a finite amount of swap space during installation. This swapping immediately frees up memory for other applications to use. In contrast, paging files function by moving pages of a program from system memory into the paging file. These pages are 4 KB in size (again, usually determined by the PC architecture); the entire program does not get swapped wholesale into the paging file. For further reading on the topic of host-side pages it is recommended to pick up a copy of a book discussing kernel internals, where this topic is usually covered in greater detail.

When you open up a file unbuffered to perform direct I/O, you are not using these cache pages. So every time you work with that recently modified and unbuffered file, you are going straight to the location on disk instead of holding the altered data in pages in memory.
This can hurt the overall performance of file I/O to disk. By relying on paging (buffered I/O) you are in fact increasing your performance and productivity, because you are not constantly going straight to the hard disk (which is obviously MUCH slower than dynamic memory) to obtain your data. While paging your data, you are able to retrieve and send it very quickly while moving on to the next step in the file I/O process.

There are certain moments when the data from a cache page will be flushed to disk. When data is cached to such a page, the page is marked as dirty, because it holds data that has not yet been written to disk. These dirty pages get written to disk when a Synchronize Cache is invoked, or when new data belonging to the same file area overwrites the older paged data. In the latter case the older paged data gets written to disk while the newer data gets paged to the same memory address, marking that page as dirty once again.
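The commands below sketch this behavior on a live system: buffered writes accumulate as dirty page cache data until a flush, while a direct write bypasses the cache entirely. The file paths and sizes are assumptions, and /tmp must live on a disk-backed file system for the O_DIRECT example to work.

# Amount of dirty (not yet written) page cache data
grep -i dirty /proc/meminfo
# A buffered write lands in cache pages first
dd if=/dev/zero of=/tmp/buffered.dat bs=1M count=256
grep -i dirty /proc/meminfo
# Ask the kernel to flush dirty pages out to disk
sync
grep -i dirty /proc/meminfo
# The same write with O_DIRECT goes straight to disk and leaves the page cache alone
dd if=/dev/zero of=/tmp/direct.dat bs=1M count=256 oflag=direct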
To keep things simple I will detail the 32-bit VMA model. The Random Access Memory (RAM) is separated into zones by the OS:

Zone 0: the Direct Memory Access (DMA) region.
Zone 1: a linear mapping of Kernel Space (see Figure 1), shared with User Space calculations.
Zone 2 (High Memory Zone): a non-linear mapping of the last 1/8 GB of the VMA.

Figure 1. 32-bit VMA model

We obtain 4 GB of virtual memory addressable regions on a 32-bit Operating System because 2^32 equals 4,294,967,296 bytes, or 4 GB. As for the 3 GB of User Space virtual memory, it is addressed wherever there is free memory. We find our cache pages in the Kernel Space portion of the VMA model. On a 64-bit architecture memory addressing performs much better: there is more linear mapping and less swapping in the high memory region.

General Layout

Some of you may be wondering why I am discussing all of this information in an article on storage management. This information becomes vital with regards to understanding how the entire I/O subsystem functions in Linux. When you tune one portion of the subsystem, you also need to understand the ramifications it may have on another portion, which is why I present the diagram in Figure 2.

Figure 2. The I/O Subsystem

To briefly summarize, the diagram does a pretty good job of displaying the general layout of the I/O subsystem on any OS. When an I/O process is initiated for a disk device, it first travels through the file system layer (assuming that it is a file over a file system) to figure out where it needs to go. Once that is figured out, it is either paged (enabled by default) or thrown into a temporary buffer cache (for direct I/O). When the I/O is ready to be written, it is thrown into another cache to wait for the scheduler. It is up to the scheduler to intervene, prioritize and coalesce the transfer(s); this occurs for both Synchronous and Asynchronous I/O. The I/O then gets sent to the block device driver (sd_mod) and follows through the many other layers involved after that block device driver (scsi_mod, then the HBA module). These layers vary depending on the block device being written to.

One last thing to note about this diagram is that the process runs in User Mode and hits Kernel Mode between the Process block and the File system block (on the diagram); everything below that is Kernel Mode. This transition occurs when a user mode function or API is called which has a corresponding syscall (system call) into Kernel Mode, such as a write() or read() function. When writing to a physical device directly, the file system layer is skipped and the I/O requests are placed into the temporary buffer cache for direct I/O. If you are transferring I/O to/from a SCSI-based device (i.e., SCSI, SAS, Fibre Channel, etc.), it is when these transfers fall onto the SCSI driver that the CDB (Command Descriptor Block) structure is populated and the SCSI Disk Timeout Value starts counting. If your I/O goes stale, you do not see any activity and it does not time out, chances are it never reached the SCSI layer; the transfer(s) are possibly stuck in one of the earlier layers.
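On most 2.6 kernels the elevator (scheduler) mentioned above is exposed per disk through sysfs, as sketched below; the device name and the set of available schedulers will vary from system to system.

# The scheduler currently in use is shown in brackets, e.g. noop anticipatory deadline [cfq]
cat /sys/block/sda/queue/scheduler
# Switch that disk to the deadline elevator (run as root)
echo deadline > /sys/block/sda/queue/scheduler
# Depth of the request queue feeding the scheduler
cat /sys/block/sda/queue/nr_requests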
The SCSI Subsystem and Representation of Devices

In Linux, the SCSI Subsystem exists as a multilayered interface divided into the Upper, Middle and Lower layers. The Upper Layer consists of device type identification modules (i.e., the Disk Driver (sd), Tape Driver (st), CD-ROM Driver (sr) and Generic Driver (sg)). The Middle Layer's purpose is to connect the Upper and Lower Layers; in our case this is the scsi_mod.ko module. The Lower Layer holds the device drivers for the physical communication interfaces between the host's SCSI Layer and the end target device. Here is where we will find the device driver for the HBA.

Figure 3. The SCSI Subsystem

Whenever the Lower Layer detects a new SCSI device, it will provide scsi_mod.ko with the appropriate host, bus (channel), target and LUN IDs. The type of media a device presents determines which Upper Layer driver will be invoked. If you view /proc/scsi/scsi you can see what each SCSI device's type is (Listing 1).

Listing 1. Listing identified SCSI devices

[pkoutoupis@linuxbox3 ~]$ cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: WDC WD800BEVS-08 Rev: 08.0
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 00
  Vendor: MATSHITA Model: DVD-RAM UJ-860   Rev: RB01
  Type:   CD-ROM                           ANSI SCSI revision:

The Direct-Access media type will utilize sd_mod.ko, while the CD-ROM media type will utilize sr_mod.ko. Each respective driver will allocate an available major and minor number to each newly discovered and properly identified device and, on the 2.6 kernel, udev will create an appropriate node name for each device. As an example, a Direct-Access media type will be accessible through the /dev/sdb node name. When a device is removed, the physical interface driver will detect it at the Lower Layer and pass the information back up to the Upper Layer.

There are multiple approaches to tuning SCSI devices. Note that the more complex approach involves editing source code and recompiling the device driver so that these variables are hard-coded for the lifetime of the utilized driver(s). That is not what we want; we want a more dynamic approach, something that can be customized on-the-fly. One day it may be optimal to configure a driver one way, and the next day another.

The 2.6 Linux kernel introduced a new virtual file system to help reduce the clutter that /proc had become (for those not familiar with the traditional UNIX file system hierarchy, /proc was originally intended for process information): the sysfs file system mounted at /sys. To summarize, /sys contains all components registered with the Operating System's kernel. That is, you will find block devices, networking ports, devices, drivers and so on mapped from this location and easily accessible from user space for enhanced configuration. It is through /sys that we will be able to navigate to the disk device and fine tune it to how we wish to utilize it. After I explain sysfs, I will move on to describing modules and how a module can be inserted with fine-tuned and pseudo-static parameters.

Let us assume that the disk device whose parameters we want to view and possibly modify is /dev/sda. You would navigate your way to /sys/block/sda. All device details are stored or linked from this point for the device node named /dev/sda.
If you go into the device directory you can view timeout values, queue depth values, current states, vendor information and more (Listing 2).

Listing 2. Listing device parameters in sysfs

[pkoutoupis@linuxbox3 device]$ ls
block:sda    delete          evt_media_change  iodone_cnt     modalias  queue_depth  rev                  scsi_generic:sg0  subsystem  uevent
bsg:0:0:0:0  device_blocked  generic           ioerr_cnt      model     queue_type   scsi_device:0:0:0:0  scsi_level        timeout    vendor
bus          driver          iocounterbits     iorequest_cnt  power     rescan       scsi_disk:0:0:0:0    state             type

To view a parameter value you can simply open the file for a read:

[pkoutoupis@linuxbox3 device]$ cat timeout
60

Here we can see that the timeout value for this SCSI device is 60 seconds. To modify the value you can echo a new value into it:

[root@linuxbox3 device]# echo 180 >> timeout
[root@linuxbox3 device]# cat timeout
180

You can perform the same task for the queue depth of the device, along with the rest of the values. Unfortunately, values set this way are not maintained statically: every time the device mapping is refreshed (through a module removal/insertion, a bus scan or a reboot), the values revert to their defaults. This can be both good and bad. A basic shell script can modify all values for all desired disk devices so that the user does not have to enter each device path and modify everything one by one, and on top of that basic shell script a simple cron job can validate that the values are maintained and, if not, rerun the original modifying script (a minimal sketch of such a script appears at the end of this section).

Another way to modify values and have them pseudo-statically maintained is by setting them on the module itself. For example, if you do a modinfo on scsi_mod you will see the following dumped to the terminal screen (Listing 3).

Listing 3. Listing module parameters

[pkoutoupis@linuxbox3 device]$ modinfo scsi_mod
filename:       /lib/modules/ ... fc8/kernel/drivers/scsi/scsi_mod.ko
license:        GPL
description:    SCSI core
srcversion:     E9AA190FE1857E8BB
depends:
vermagic:       ... fc8 SMP mod_unload 686 4KSTACKS
parm:           dev_flags:given scsi_dev_flags=vendor:model:flags[,v:m:f] add black/white list entries for vendor and model with an integer value of flags to the scsi device info list (string)
parm:           default_dev_flags:scsi default device flag integer value (int)
parm:           max_luns:last scsi LUN (should be between 1 and 2^32-1) (uint)
parm:           scan:sync, async or none (string)
parm:           max_report_luns:report LUNS maximum number of LUNS received (should be between 1 and 16384) (uint)
parm:           inq_timeout:timeout (in seconds) waiting for devices to answer INQUIRY. Default is 5. Some non-compliant devices need more. (uint)
parm:           scsi_logging_level:a bit mask of logging levels (int)

The appropriate way to enable a pseudo-static value is to insert the module with that parameter:

[pkoutoupis@linuxbox3 device]$ modprobe scsi_mod max_luns=255

or to modify the /etc/modprobe.conf file (some platforms use /etc/modprobe.conf.local) by appending a line such as options scsi_mod max_luns=255 and then reinserting the module. In both cases you must rebuild the initial RAM disk so that when the host reboots it will load max_luns=255 during the insertion of the scsi_mod module. This is what I mean by pseudo-static: the value is maintained only while the module is inserted and must always be defined during its insertion to stay statically assigned.
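The following is a minimal sketch of the shell script and cron entry described above; the timeout, queue depth and the idea of applying them to every sd* device are assumptions, so adjust them to your own environment before use.

#!/bin/bash
# reapply-scsi-tunables.sh -- reapply preferred SCSI device values after a rescan or reboot
TIMEOUT=180
QDEPTH=32
for dev in /sys/block/sd*/device; do
    # Only write values the device actually exposes as writable
    [ -w "$dev/timeout" ]     && echo $TIMEOUT > "$dev/timeout"
    [ -w "$dev/queue_depth" ] && echo $QDEPTH  > "$dev/queue_depth"
done

# Example crontab entry to re-validate (and reapply) the values every 15 minutes:
# */15 * * * * /usr/local/sbin/reapply-scsi-tunables.sh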
Some may now be asking: what exactly is a timeout value, and what does queue depth mean? A lot of resources with some pretty good information can easily be found on the Internet, but as far as basic explanations go, a SCSI timeout value is the maximum time an outstanding SCSI command has to complete on that SCSI device. So, for instance, when scsi_mod initiates a SCSI command for the physical drive (the target) associated with /dev/sda with a timeout value of 60, it has 60 seconds to complete the command; if it doesn't, an ABORT sequence is issued to cancel the command.

The queue depth gets a little more involved, in that it limits the total number of transfers that can be outstanding for a device at a given point. If I have 64 outstanding SCSI commands that need to be issued to /dev/sda and my queue depth is set to 32, I can only service 32 at a time, limiting my throughput and thus creating a bottleneck that slows down future transfers. On Linux, queue depth becomes a very hairy topic, primarily because it is not adjusted only in the block device parameters but is also defined at the Lower Layer of the SCSI Subsystem, where the HBA throttles I/O with its own queue depth values; this will be briefly explained in the next section. Other limitations can be seen on the storage end: the storage controller(s) can handle only so many service requests, and in most cases they may be forced to begin issuing ABORTs for anything above their limit. In turn, the transfers may be retried from the host side and complete successfully, so a lot of this may not be apparent to the administrator. Again, it becomes necessary to familiarize oneself with these terms when dealing with mass storage devices.
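As a preview of the next section, the commands below peek at the queue depth throttled at the SCSI device and at the knobs an HBA driver exposes; qla2xxx is only an example of a QLogic Fibre Channel driver, and your lsmod/lspci output determines the actual module name on your host.

# Queue depth as seen by the block/SCSI device
cat /sys/block/sdc/device/queue_depth
# Is the assumed HBA module even loaded?
lsmod | grep -i qla
# Parameters the module accepts at load time
modinfo -p qla2xxx
# Current values exposed through sysfs
ls /sys/module/qla2xxx/parameters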
An HBA can also be optimized in much the same fashion as the SCSI device, although it is worth noting that the parameters that can be adjusted on an HBA are vendor specific. These include additional timeout values, queue depth values, port down retry counts and so on, and some HBAs come with volume and/or path management capabilities. Simply identify the module name for the device by doing an lsmod (it may also be useful to first identify the device, usually attached to your PCI bus, by executing an lspci), and from that point you should be able to either navigate the /sys/module path or just list all module parameters for that device with a modinfo.

Methods of Storage Management

Again, the Linux 2.6 kernel has made great advancements in this area. With the introduction of newer file systems and new methods of device and volume management, it is increasingly easy for a storage administrator to set up a Linux server to perform all necessary tasks. In my personal opinion, the best interface introduced in the 2.6 kernel was the device-mapper framework. Device-mapper is one of the best collections of device drivers that I have ever worked with: it brings high availability, flexibility and more to the Linux 2.6 kernel. Device-mapper is a Linux 2.6 kernel infrastructure that provides a generic way to create virtual layers of a block device while supporting striping, mirroring, snapshots, concatenation, multipathing, etc. Device-mapper multipath provides the following features (a minimal configuration sketch follows the list):

- Allows multivendor storage RAID systems and host servers equipped with multivendor Host Bus Adapters (HBAs) redundant physical connectivity along the available independent Fibre Channel fabric paths
- Monitors each path and automatically reroutes (fails over) I/O to an available functioning alternate path if an existing connection fails
- Provides an option to perform fail-back of the LUN to the repaired paths
- Implements failover and failback actions transparently, without disrupting applications
- Monitors each path and notifies if there is a change in the path status
- Facilitates load balancing among the multiple paths
- Provides a CLI with display options to configure and manage multipath features
- Provides all Device-Mapper Multipath features for any LUN newly added to the host
- Provides an option to assign customized names to the Device-Mapper Multipath devices
- Provides persistency for the Device-Mapper Multipath devices across reboots if there are any changes in the Storage Area Network
- Provides policy-based path grouping so the user can customize the I/O flow through a specific set of paths
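A minimal /etc/multipath.conf sketch, not a definitive configuration, is shown below; the WWID and alias are hypothetical, and the real WWIDs must be taken from your own multipath -ll output. After editing, restart the multipathd service and re-run multipath -ll to confirm the new alias and path policy.

# /etc/multipath.conf
defaults {
        user_friendly_names yes
        path_grouping_policy failover      # one active path, the others on standby
        failback             immediate     # return to the preferred path once it is repaired
}
multipaths {
        multipath {
                wwid    360a98000bb55555cd000000000000000   # hypothetical WWID
                alias   dbvol01                             # customized device name
        }
}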
Also built on top of the device-mapper framework is LVM2, which allows the administrator to utilize those virtual layers of block devices to create both physical and logical volumes, with snapshot capabilities. In LVM2, physical and logical volumes can be striped or mirrored to create volume groups, and volume groups can be dynamically resized while active and online. It is on these volume groups that you will write a file system (if applicable) and mount it locally, either to manage locally or to share within a network. If you are utilizing an older version of the 2.6 kernel, you may be able to use mdadm as a volume and path manager.
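A minimal LVM2 workflow on top of two multipathed devices might look like the sketch below; every name and size here (mpath0, mpath1, vg_data, lv_data, 50 GB) is an assumption used only for illustration.

# Initialize the physical volumes
pvcreate /dev/mapper/mpath0 /dev/mapper/mpath1
# Group them into a volume group
vgcreate vg_data /dev/mapper/mpath0 /dev/mapper/mpath1
# Carve out a logical volume, put a file system on it and mount it
lvcreate -L 50G -n lv_data vg_data
mkfs.ext3 /dev/vg_data/lv_data
mount /dev/vg_data/lv_data /mnt/data
# Grow the volume while it is online; follow with a file system resize (e.g. resize2fs) if supported
lvextend -L +10G /dev/vg_data/lv_data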
The File System

The file system is one of the most important aspects of storage management. Deciding which file system is best suited for your environment rests solely on the type of computing that is to be done on the volume. Each file system has advantages and disadvantages under certain workloads, so before you tell yourself that you will use ZFS over FUSE, Ext3, XFS or anything else, take some time to understand the differences between the options. Some questions to ask yourself are:

- What is relevant?
- What is to be the size of the file system (i.e., files and directories)?
- How frequently will the file system be accessed, and how much load should I expect it to work with?
- How recoverable or redundant do I want the file system to be?
- If I need it to be 100% recoverable, how much performance am I willing to lose?
- Do I want the application(s) accessing the file system to handle caching, or the OS?

One thing worth noting is that there is no single best file system. Across the different types of file systems you will find differences in:

- the limitations on file system and file sizes
- the number of files and directories they can store
- on-disk space efficiency and block allocation methods
- journaling capabilities and options
- data consistency
- crash recovery methods (fsck, journaling, copy-on-write)
- special features such as direct I/O support, built-in compression and more

To quickly summarize some of the local file systems: Ext2 is a simple, fast and stable file system, and while it is easy to repair, its recovery is extremely slow, which is why the developers introduced the journal in Ext3. Ext3 performs slower than Ext2, primarily because of the journal. Different journaling options are supported, but by default the file system copies all metadata and file data to the journal just before committing them to their appropriate locations on the disk device. This method of double writing costs a lot of time in seek and write operations, though it does allow a speedy recovery. Another problem with Ext3 is that its block allocation methods consume too much space for metadata, leaving less room for actual user data.

While ReiserFS is a journaled file system that performs much better and is more space efficient than Ext3, it is known for being a little less stable and has also been known for poor repair methods. XFS has been known as one of the better file systems for high-performance and high-volume computing. It too supports a journal but, unlike Ext3, XFS journals only the metadata of the operations to be performed. It is also an extent-based file system that reserves data regions for a file, thus reducing the amount of space wasted on metadata. Data consistency is not as much of a focus here as it is with Ext3.

Is the file system to run on an embedded device? For embedded devices there are usually limits on the number of erase/write operations that can be committed to a flash cell, which is why JFFS2 supports a built-in method of wear leveling across the flash device. It is slow, but it does its job.

Performance Tuning

Much like anything else, file systems can also be tuned. As I mentioned earlier, things such as journaling options can be altered. So can other parameters, such as limiting write operations as much as possible by mounting the device with the mount -o noatime switch. You can also limit the maximum size of read/write operations when mounting a device.
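The sketch below shows the mount-time switches mentioned above, plus a related block-layer knob that caps the largest single I/O; the device, mount point, journaling mode and the 128 KB cap are all assumptions.

# ext3 mounted without access-time updates and with metadata-only journaling
mount -o noatime,data=writeback /dev/vg_data/lv_data /mnt/data
# A matching /etc/fstab entry would look like:
# /dev/vg_data/lv_data  /mnt/data  ext3  noatime,data=writeback  0 2
# Cap the largest single I/O the block layer will issue for this disk (in KB)
echo 128 > /sys/block/sda/queue/max_sectors_kb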
Other performance gains can depend on the method of connection between host(s) and target(s), where load balancing becomes a big problem. How do you balance the load across all Logical Units (LUs), making sure that all of them can be serviced within an appropriate time frame? Fortunately, this can be configured through device-mapper and multipath-tools. In device-mapper you can configure the load to be distributed in a round-robin method of access across all available paths or, if set up as failover, you can have I/O run on a single active path at a time and make sure that two separate volumes travel across two separate paths (Listing 4).

Listing 4. Listing multipath devices and paths

$ multipath -ll
bb55555cd
[size=92 GB][features="1 queue_if_no_path"][hwhandler="0"]
\_ round-robin 0 [active]
 \_ 30:0:0:2 sdc 8:32 [active][ready]
\_ round-robin 0 [enabled]
 \_ 31:0:0:2 sdg 8:96 [active][ready]
bb55555cd
[size=27 GB][features="1 queue_if_no_path"][hwhandler="0"]
\_ round-robin 0 [enabled]
 \_ 30:0:0:1 sdb 8:16 [active][ready]
\_ round-robin 0 [active]
 \_ 31:0:0:1 sdf 8:80 [active][ready]

In the above example I have two separate paths for each volume. Each path goes through a Fibre Channel node, with host numbers 30 and 31. The first volume's active path is set to pass through host 30, while for the second volume it is the opposite. This way I can have the I/O to both volumes balanced across both paths, as opposed to limiting traffic to a single path. If an active path were to fail, then I/O would resume on the other path. As mentioned earlier, other performance gains can be achieved at the SCSI subsystem level by modifying specific disk device or HBA parameters. Again, any and all modifications made will have some sort of impact on the rest of the I/O subsystem, so please proceed with caution.

Conclusion

As one can see, Linux storage management is not a simple topic of discussion. A lot is involved when attempting to set up and configure an environment for computing across Linux-hosted volumes. As always, before you proceed with any modification, review all of the provided reading materials; it can save you from a lot of future headaches.

Further Reading

Some excellent books come to mind:
- Linux Kernel Development by Robert Love
- Understanding the Linux Kernel by Daniel Bovet and Marco Cesati
- Solaris Internals: Solaris 10 and OpenSolaris Kernel Architecture by Richard McDougall and Jim Mauro

Resources

- Bar, Moshe. Linux File Systems
- Pate, Steven. UNIX Filesystems
- Installation and Reference Guide: Device Mapper Multipath Enablement Kit for HP StorageWorks Disk Arrays
- Wikipedia articles on Linux file systems: en.wikipedia.org/wiki/ext3, en.wikipedia.org/wiki/reiserfs, en.wikipedia.org/wiki/xfs

About the Author

Petros Koutoupis has been using Linux since 2001 and has been in software development and administration even longer. He has been involved with enterprise storage computing from 2005 to the present and currently offers consultation services in the same field. He can always be contacted at pkoutoupis@hydrasystemsllc.com.
