Technical Report

Running Oracle 11g RAC on Violin
Installation Best Practices for Oracle 11gR2 RAC and Linux 5.x
Version 1.0

Abstract
This technical report describes the process for installing Oracle 11gR2 RAC on a Linux 5.x operating system and configuring the software to use Violin Memory arrays. For the purposes of this document, Oracle 11.2.0.3 and Oracle Enterprise Linux 5.7 were used.
Contents
1 Introduction
1.1 Purpose and Scope
1.2 Intended Audience
1.3 Terminology
1.4 Key Recommendations
1.5 Additional Resources
2 LUN Setup
2.1 Defining LUN Data Block Sizes
3 Linux OS Configuration and Oracle Pre-installation
3.1 Enabling YUM
3.2 Enabling the Oracle Preinstall RPM
4 Multipathing Software Setup
5 Adjusting I/O Scheduling Properties and Permissions
5.1 Creating UDEV Rules
5.2 Add Multipathing Aliases
6 Configuring Oracle ASM
6.1 Installing ASMLib
6.2 Installing Grid Infrastructure
6.3 Creating ASM Disk Groups
7 Configuring the Oracle Database
7.1 Installing the Oracle Software Package
7.2 Creating a Database Using the Database Configuration Assistant
7.3 Editing the DBCA Creation Scripts
1 Introduction
This document describes the process for installing an Oracle 11gR2 RAC database onto a Red Hat 5.x-compatible operating system in order to use Violin Memory flash storage. RAC stands for Real Application Clusters, Oracle's clustering product. For the purposes of this document, Oracle 11.2.0.3 and Oracle Enterprise Linux 5.7 were used. To take advantage of the extreme performance characteristics of Violin Memory flash storage, the Oracle Automatic Storage Management (ASM) volume manager will be used to achieve raw performance (as opposed to using a file system).

1.1 Purpose and Scope
This document describes the recommended steps necessary to complete the installation process, along with examples and expected outputs. Experienced users who find this level of detail unnecessary should read Section 1.4, Key Recommendations, which serves as a quick-start list of high-level steps to follow.
This report describes the process for building a generic system and is not intended to address individual customer requirements for security, performance, resilience and other operational aspects that may be relevant. Organizations with existing operational guidelines should give those guidelines higher priority; where any recommendation in this document conflicts with existing policies, adhere to the existing policies. Violin Memory cannot accept liability for issues that may occur as a result of following these recommendations.

1.2 Intended Audience
This document is intended for Oracle database administrators and Linux system administrators who want to implement Oracle 11g using Violin flash memory arrays for primary data storage. It assumes that readers have prior knowledge of Oracle and Linux software installation and configuration.
1.3 Terminology

NOOP
The NOOP scheduler is a simple FIFO queue that uses a minimal amount of CPU instructions per I/O operation to accomplish the basic merging and sorting needed to complete I/O operations.

Oracle ASM
ASM is a volume manager and a file system for Oracle database files that supports single-instance Oracle Database and Oracle Real Application Clusters (Oracle RAC) configurations.

Oracle Clusterware
Clusterware is software that enables servers to operate together as if they were one server. Each server in a cluster runs additional processes that communicate with each other, so that the separate servers appear as one server to applications and end users.

RPM
RedHat Package Manager (RPM) is the package management system used for packaging in the Linux Standard Base (LSB). RPM command options are grouped into subgroups for querying, verifying, installing, upgrading, and removing Linux packages.

vshare
Violin Memory vshare is a solution for block storage management. vshare runs as software on the memory gateway, enabling host systems (for example, database servers) to use the iSCSI and Fibre Channel (FC) transport protocols to access logical units of data (LUNs) stored within Violin arrays.

UDEV
UDEV is a replacement for the Device File System (DevFS) starting with the Linux 2.6 kernel series. It allows devices to be identified dynamically based on properties such as vendor ID and device ID. UDEV runs in user space (as opposed to DevFS, which executed in kernel space).
YUM
YellowDog Updater, Modified (YUM) is a package manager that searches numerous repositories for packages and their dependencies so they may be installed together, alleviating dependency issues. Oracle Linux uses yum to fetch packages and install RPMs.

1.4 Key Recommendations
This section gives a high-level summary of the steps required to complete the installation:
1. Create LUNs using Violin Memory vshare.
2. Set up YUM and install the oracle-rdbms-server-11gr2-preinstall package.
3. Install and configure the device-mapper multipathing software, noting the specific device requirements when adding entries to the multipath.conf file for Violin arrays.
4. Create UDEV rules to handle LUNs presented from Violin Memory arrays, noting the device-specific configuration settings for these rules.
5. Add aliases in the multipath.conf file for each LUN presented from Violin Memory arrays.
6. Install and configure ASMLib.
7. Install Oracle Grid Infrastructure (RAC).
8. Install the Oracle Database software.
9. Use DBCA to generate database-creation scripts and amend the redo log sector size to 4k.

1.5 Additional Resources
Oracle Database SQL Language Reference
http://docs.oracle.com/cd/e11882_01/server.112/e26088/clauses004.htm#chdbfajh
Oracle Grid Infrastructure Installation Guide
http://docs.oracle.com/cd/e11882_01/install.112/e22489/toc.htm
Violin Best Practices: Optimizing Oracle Data Block Sizes
http://info.violin-memory.com/oow-wp-offer.html?paramname=oracleblocksizewp_web

2 LUN Setup
With Oracle RAC, you must ensure that there is a suitable LUN configuration for the Voting disks and OCR files. These files need to be stored in LUNs separate from the data LUNs, which you can do in Violin Memory vshare by creating the following items with a 512-byte sector size:
1. 3 x 1G LUNs for Voting disks and OCR files
2. Remaining data LUNs as normal

2.1 Defining LUN Data Block Sizes
When creating and exporting LUNs from Violin Memory vshare, an option exists to define the block size of the LUN, as shown in Figure 1. This option changes the way in which the LUN appears to the operating system, as having either a 512B or a 4k sector size. It does not change the real sector size of the LUN, which remains at 4k, but instead influences the operating system's view of the sector size. Unless you are using Oracle Unbreakable Linux with ASMLib, always choose 512B from vshare. When using Linux kernels lower than 2.6.32, the 512B option should always be chosen, because support for 4k sectors was not introduced until the 2.6.32 kernel release.
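Once a LUN has been presented, the sector size the operating system sees can be confirmed from sysfs. A minimal sketch; the device name (sdb) is illustrative and should be replaced with your own:

```shell
# Print the sector sizes the kernel reports for a block device.
# $1 is the device's sysfs queue directory, e.g. /sys/block/sdb/queue
show_sector_sizes() {
    echo "logical block size:  $(cat "$1/logical_block_size")"
    echo "physical block size: $(cat "$1/physical_block_size")"
}

# Example (device name is illustrative):
# show_sector_sizes /sys/block/sdb/queue
```

A LUN exported with the 512B option reports a 512-byte logical block size here, regardless of the array's native 4k sector size.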
Figure 1. Block Size Selection During LUN Creation

3 Linux OS Configuration and Oracle Pre-installation
This section describes the recommended steps for configuring the operating system to use YUM and installing the various packages required to run Oracle Database.

3.1 Enabling YUM
The YUM utility is a command-line package management tool available in Oracle, RedHat and other Linux distributions. To set up YUM, follow the steps below:
1. Download the YUM repository.
# cd /etc/yum.repos.d/
# wget http://public-yum.oracle.com/public-yum-el5.repo
2. Edit the repo file for the correct OS.
# vi public-yum-el5.repo
3. Find your Linux version and change enabled=0 to enabled=1. In this case [el5_u1_base] was changed.
4. Confirm that yum is working.
# yum repolist

3.2 Enabling the Oracle Preinstall RPM
The oracle-rdbms-server-11gr2-preinstall RPM can be installed using either YUM or up2date; on Oracle Linux 5 the equivalent package is named oracle-validated, which is used below. The RPM automatically sets up all the prerequisites necessary for Oracle 11gR2, including all relevant packages, kernel parameters and users/groups. The following procedure uses YUM as an example.
1. Install the oracle-validated RPM.
# yum install oracle-validated -y
2. Change the oracle user password.
# passwd oracle
To check the outcome of the oracle-validated installation, check the logfile located at /var/log/oracle-validated/results/orakernel.log. For a role-separated RAC installation with a grid user, create that user manually as per the Oracle Grid Infrastructure Installation Guide.

4 Multipathing Software Setup
This section describes the recommended steps for installing and configuring Device Mapper Multipathing to provide reliability and performance.
Note: If the Violin Memory array is virtualized behind software such as EMC VPLEX or IBM SVC, follow IBM or EMC best practices to ensure correct multipathing configuration.
Multipathing software provides resilience and performance benefits when multiple paths exist between storage devices and servers. The software detects which duplicate paths correspond to each underlying physical device and creates a virtual device for each physical LUN. The primary benefit of this virtual device is that any underlying path failure can be tolerated provided at least one path remains available. The multipathing software detects failed paths and re-issues any failed I/O requests on a remaining active path in a manner that is transparent to the caller. This transparency is essential for Oracle software such as ASM and the database, because they are unaware of the multipathing layer and have no built-in functionality to perform the same task.
An additional benefit of multipathing software is the performance increase that can be gained by spreading I/O requests over numerous underlying paths. This is of particular importance when using high-performance storage such as Violin Memory arrays.
The following procedure describes the process for configuring Device Mapper Multipathing:
1. Confirm that the multipath packages are installed.
# yum list device-mapper device-mapper-multipath
Sample output:
Installed Packages
device-mapper.x86_64 1.02.74-10.el5
device-mapper-multipath.x86_64 0.4.9-56.el5_3.1
2. If the packages are not present, run the following command to install them:
# yum install device-mapper device-mapper-multipath
3. Enable multipath on server start:
# chkconfig multipathd on
# chkconfig --list multipathd
multipathd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
4. Create the multipath.conf file:
# cp /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf /etc
5. Whitelist the Violin Memory device:
# vi /etc/multipath.conf
blacklist {
    devnode "*"
}
blacklist_exceptions {
    devnode "sd*"
}
defaults {
    user_friendly_names yes
}
devices {
    device {
        vendor                "VIOLIN"
        product               "SAN ARRAY"
        path_grouping_policy  group_by_serial
        getuid_callout        "/sbin/scsi_id --whitelisted --replace-whitespace --page=0x80 --device=/dev/%n"
        hardware_handler      "0"
        features              "1 queue_if_no_path"
        fast_io_fail_tmo      5
        dev_loss_tmo          30
        failback              immediate
        rr_weight             uniform
        no_path_retry         fail
        path_checker          tur
        rr_min_io             4
        path_selector         "round-robin 0"
    }
}
6. Load the multipath modules and scan for devices:
# modprobe dm-multipath
# modprobe dm-round-robin
# multipath -v2
You should now see the multipath devices listed. The listing shows the details of devices discovered by the multipath software. Initially, these have names such as mpath1 and mpath2, so the multipath.conf file must be updated to add entries for each device, which is covered in a later step.

5 Adjusting I/O Scheduling Properties and Permissions
The I/O scheduler determines the way in which block I/O operations are submitted to storage. A common theme in scheduler behavior is the aim of reducing the impact of hard drive seek time: most I/O schedulers assign I/O operations to queues and then reorder them to reduce the amount of time the disk head spends moving between locations. NAND flash memory has no seek time and exhibits latencies that are frequently less than one millisecond, so there is nothing to be gained from these seek-oriented schedulers. Tests have consistently shown a significant increase in performance when switching to the NOOP scheduler. The UDEV rule created below will also set the correct permissions for the LUNs required by Oracle.

5.1 Creating UDEV Rules
A new UDEV rule must be created to set the best practice configuration for the I/O scheduler.
1. Create the 12-violin.rules file:
# cd /etc/udev/rules.d/
# vi /etc/udev/rules.d/12-violin.rules
2. Add the following content. This rule will also set the correct permissions:
KERNEL=="sd*[!0-9]|sg*", BUS=="scsi", SYSFS{vendor}=="VIOLIN", SYSFS{model}=="SAN ARRAY*", RUN+="/bin/sh -c 'echo noop > /sys/$devpath/queue/scheduler && echo 1024 > /sys/$devpath/queue/nr_requests'"
Note: If the disks are virtualized by software such as VMware, SVC or EMC VPLEX, the UDEV rule will need to be edited appropriately, as the vendor and model IDs will be masked by the virtualization. For nr_requests, use 1024 for higher IOPS or 32 to 64 for lower latency.
3. Reload the UDEV rules:
# udevadm control --reload-rules
# udevadm trigger
4. Check that the new rules have worked for the sd* devices:
# cat /sys/block/sdb/queue/scheduler
[noop] anticipatory deadline cfq
# cat /sys/block/sdb/queue/nr_requests
1024
5. Start the multipath daemon:
# service multipathd start

5.2 Add Multipathing Aliases
By default the multipath virtual devices have names in the format /dev/mapper/mpath<number>. For better manageability, Violin Memory recommends renaming these devices to names that are more obviously associated with their corresponding target. Possible naming conventions include the use of the array name or the intended ASM disk use (e.g., DATA1). If the intention is not to use the ASMLib kernel library, these devices will need to be explicitly configured in the multipath.conf file in order to change the ownership, group and permissions to make them writeable by the owner of the ASM software. Each LUN presented from Violin Memory has a unique identifier.
These identifiers are used to create the user-friendly aliases in the multipath configuration file, so a list of the existing LUNs is needed.
1. Identify the LUN unique names:
# multipath -ll
2. Use the highlighted names shown:
mpath1 (SVIOLIN_SAN_ARRAY_BAFEFD2A8C2E2817) dm-2 VIOLIN,SAN ARRAY
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 8:0:0:1 sdb 8:16 active ready running
  `- 6:0:0:1 sdc 8:32 active ready running
mpath2 (SVIOLIN_SAN_ARRAY_BAFEFD2ACE3C949E) dm-3 VIOLIN,SAN ARRAY
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 8:0:0:2 sdd 8:48 active ready running
  `- 6:0:0:2 sde 8:64 active ready running
< output truncated >
3. Based on these values, add entries to the multipath.conf file as shown below:
# vi /etc/multipath.conf
Sample:
multipaths {
    multipath {
        wwid SVIOLIN_SAN_ARRAY_BAFEFD2A8C2E2817
        alias violin_lun1
    }
    multipath {
        wwid SVIOLIN_SAN_ARRAY_BAFEFD2ACE3C949E
        alias violin_lun2
    }
    <...etc...>
}
4. Flush and rebuild the device maps:
# multipath -F
# multipath -v2
Sample output showing that the multipath.conf entries have worked:
create: violin_lun1 (SVIOLIN_SAN_ARRAY_BAFEFD2A8C2E2817) undef VIOLIN,SAN ARRAY
size=100G features='0' hwhandler='0' wp=undef
`-+- policy='round-robin 0' prio=1 status=undef
  |- 8:0:0:1 sdb 8:16 undef ready running
  `- 6:0:0:1 sdc 8:32 undef ready running
< output truncated >
5. Check that the devices have been renamed:
# ls -l /dev/mapper/violin*
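With many LUNs, the alias stanzas in step 3 can be generated rather than typed by hand. A rough sketch, assuming the WWID format shown in the sample output above; the violin_lun naming is just the convention used in this document:

```shell
# Read "multipath -ll" output on stdin and print an alias stanza for
# each Violin WWID found, numbering the aliases in order of appearance.
make_aliases() {
    awk 'match($0, /\(SVIOLIN_[^)]*\)/) {
        wwid = substr($0, RSTART + 1, RLENGTH - 2)
        printf "    multipath {\n        wwid %s\n        alias violin_lun%d\n    }\n", wwid, ++n
    }'
}

# Example: multipath -ll | make_aliases
# Review the output, then wrap it in a multipaths { ... } block in
# /etc/multipath.conf.
```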
Note: If permissions were set in multipath.conf, the devices will be set accordingly. If ASMLib is used, the ownership will remain as root:root.

6 Configuring Oracle ASM
This section describes how to configure the ASMLib kernel driver, as well as how to install and configure the Grid Infrastructure software to use Violin Memory with a 4k sector size.

6.1 Installing ASMLib
In this example ASMLib[1] will be installed using yum, although alternative methods include the use of up2date or the rpm tool if local copies of the packages are available. You must find the correct ASMLib kernel library for your operating system; this example uses oracleasm-2.6.18-274.el5.x86_64.rpm.

Check the kernel version to download the correct ASMLib RPM
# uname -a
Sample output:
Linux rac1a 2.6.18-274.el5 #1 SMP Mon Jul 25 13:17:49 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux

Install the ASMLib libraries
Do on all nodes:
# yum install oracleasm-support -y
# wget http://download.oracle.com/otn_software/asmlib/oracleasmlib-2.0.4-1.el5.x86_64.rpm
# yum install oracleasm-2.6.18-274.el5.x86_64.rpm -y    (download the correct one for your kernel if different)
# yum localinstall oracleasmlib-2.0.4-1.el5.x86_64.rpm -y

Configure ASMLib
Do on all nodes:
# /etc/init.d/oracleasm configure
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
In this example oracle is the owner of the ASM software and dba is the group; for a role-separation installation these would typically be grid and asmadmin instead.

[1] ASMLib is a kernel library developed by Oracle to manage device discovery for ASM. It is an optional but useful tool which is available only for the Linux operating system.
Add content to /etc/sysconfig/oracleasm
Do on all nodes:
# vi /etc/sysconfig/oracleasm and add the following:
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="dm"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"

Restart ASMLib
Do on all nodes:
# /etc/init.d/oracleasm restart

Stamp the LUNs in ASMLib (do this for all LUNs, including the 3 x 1G OCR/Voting LUNs)
Do on node 1 only:
To stamp an individual LUN:
# oracleasm createdisk DISKNAME /dev/mapper/lunname
e.g.
# oracleasm createdisk RECO /dev/mapper/violin_lun9
To bulk stamp LUNs:
# for lun in 1 2 3 4 5 6 7 8; do
>   oracleasm createdisk DISKNAME${lun} /dev/mapper/lunname${lun}
> done
e.g.
# for lun in 1 2 3 4 5 6 7 8; do
>   oracleasm createdisk DATA${lun} /dev/mapper/violin_lun${lun}
> done

Scan disks on the other nodes
Do on all nodes apart from node 1:
# oracleasm scandisks

6.2 Installing Grid Infrastructure
At this point it is assumed that the RAC prerequisites such as networking, SSH equivalence and the hosts file have been completed as per the Oracle Grid Infrastructure Installation Guide. When installing the Oracle Grid Infrastructure software, a number of choices are available, as shown in Figure 2.
Figure 2. Grid Infrastructure Installation Options

The voting disks and OCR files will sit in their own default diskgroup. From the installation screen, select Option 1, Install and Configure Oracle Grid Infrastructure for a Cluster. Carry on the installation as usual until the Create ASM Disk Group screen.

Figure 3. Creating an ASM Disk Group

Next, select the 3 x 1G LUNs you set up for the voting and OCR files. In this example, we used /dev/oracleasm/disks/* as the discovery string, and the diskgroup is named SYSTEMDG with the LUNs stamped as SYSTEMDG1-3. You can use the ORCL:* devices instead if you wish. Continue the installation procedure and run root.sh as normal.

6.3 Creating ASM Disk Groups
One of the benefits of the ASMLib kernel driver is the reduction in file descriptors required by a database when accessing ASM disks through the driver. For this benefit to be realised, the ASM instance parameter ASM_DISKSTRING must be set to ORCL:* or left blank (which has the same meaning). Although ASMLib creates device files in the location /dev/oracleasm/disks, setting the ASM_DISKSTRING parameter to this location causes the file descriptor benefit to be lost: using /dev/oracleasm/disks, each database shadow process must have a file descriptor open to every ASM disk that it accesses, whilst using the driver results in only one file descriptor per shadow process. However, experience has shown that it is not always possible to configure the ASMLib driver to work correctly, so the /dev/oracleasm/disks alternative is a valid workaround. It may also be the case that the decision has been made not to use ASMLib at all, in which case ASM_DISKSTRING would need to contain the path to the /dev/mapper devices representing the Violin Memory LUNs. The kfod tool can be used to check whether the driver is working correctly.
1. Check the ASMLib driver:
$ kfod cluster=false asm_diskstring='ORCL:*'
Sample output:
--------------------------------------------------------------------------------
Disk      Size       Path           User        Group
================================================================================
 1:       102400 Mb  ORCL:DATA1     <unknown>   <unknown>
 2:       102400 Mb  ORCL:DATA2     <unknown>   <unknown>
 3:       102400 Mb  ORCL:DATA3     <unknown>   <unknown>
 4:       102400 Mb  ORCL:DATA4     <unknown>   <unknown>
 5:       105472 Mb  ORCL:DATA5     <unknown>   <unknown>
 6:       105472 Mb  ORCL:DATA6     <unknown>   <unknown>
 7:       105472 Mb  ORCL:DATA7     <unknown>   <unknown>
 8:       105472 Mb  ORCL:DATA8     <unknown>   <unknown>
 9:       102400 Mb  ORCL:RECO      <unknown>   <unknown>
--------------------------------------------------------------------------------
ORACLE_SID ORACLE_HOME
================================================================================
+ASM /u01/app/11.2.0/grid
2. If no disks are visible here, the workaround of using /dev/oracleasm/disks can also be tested with kfod:
[oracle@oel57 ~]$ kfod cluster=false asm_diskstring='/dev/oracleasm/disks'
Sample output:
--------------------------------------------------------------------------------
Disk      Size       Path                         User     Group
================================================================================
 1:       102400 Mb  /dev/oracleasm/disks/DATA1   oracle   dba
 2:       102400 Mb  /dev/oracleasm/disks/DATA2   oracle   dba
 3:       102400 Mb  /dev/oracleasm/disks/DATA3   oracle   dba
 4:       102400 Mb  /dev/oracleasm/disks/DATA4   oracle   dba
 5:       105472 Mb  /dev/oracleasm/disks/DATA5   oracle   dba
 6:       105472 Mb  /dev/oracleasm/disks/DATA6   oracle   dba
 7:       105472 Mb  /dev/oracleasm/disks/DATA7   oracle   dba
 8:       105472 Mb  /dev/oracleasm/disks/DATA8   oracle   dba
 9:       102400 Mb  /dev/oracleasm/disks/RECO    oracle   dba
--------------------------------------------------------------------------------
ORACLE_SID ORACLE_HOME
================================================================================
+ASM /u01/app/11.2.0/grid
In this situation the ASM_DISKSTRING parameter was set to /dev/oracleasm/disks, which is what has been used for this example. The successful discovery of disks using kfod indicates that the ASM_DISKSTRING used on the kfod command line is correct, so this should now be set in ASM if necessary and the v$asm_disk view queried:
1. Check that the disks appear in ASM:
$ sqlplus / as sysasm
> select disk_number, header_status, state, label, path from v$asm_disk order by disk_number;
Sample output:
DISK_NUMBER HEADER_STATUS STATE    LABEL  PATH
----------- ------------- -------- ------ ----------
          0 PROVISIONED   NORMAL   DATA1  ORCL:DATA1
          1 PROVISIONED   NORMAL   DATA2  ORCL:DATA2
          2 PROVISIONED   NORMAL   DATA3  ORCL:DATA3
          3 PROVISIONED   NORMAL   DATA4  ORCL:DATA4
          4 PROVISIONED   NORMAL   DATA5  ORCL:DATA5
          5 PROVISIONED   NORMAL   DATA6  ORCL:DATA6
          6 PROVISIONED   NORMAL   DATA7  ORCL:DATA7
          7 PROVISIONED   NORMAL   DATA8  ORCL:DATA8
          8 PROVISIONED   NORMAL   RECO   ORCL:RECO
2. Create the diskgroups:
CREATE DISKGROUP DISKGROUPNAME EXTERNAL REDUNDANCY
DISK 'DISK1','DISK2'
ATTRIBUTE 'au_size'='xM',
 'compatible.asm' = '11.2',
 'compatible.rdbms' = '11.2';
For example, with a discovery string of ORCL:*:
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
DISK 'ORCL:DATA1','ORCL:DATA2'
ATTRIBUTE 'au_size'='xM',
 'compatible.asm' = '11.2',
 'compatible.rdbms' = '11.2';
For example, with a discovery string of /dev/oracleasm/disks:
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
DISK '/dev/oracleasm/disks/DATA1', '/dev/oracleasm/disks/DATA2'
ATTRIBUTE 'au_size'='64M',
 'compatible.asm' = '11.2',
 'compatible.rdbms' = '11.2';
Note: 'xM' is a placeholder; au_size should be set appropriately for your environment.

7 Configuring the Oracle Database
Database instances using Violin Memory need specific configuration in order to achieve optimum performance. There are two elements to consider when configuring the database to run on 4k-sector storage:
1. Database block size (db_block_size): to ensure 4k alignment and optimal performance, values of 4k or greater should always be used with Violin Memory.
2. Online redo log block size: by default this is 512 bytes, but with Oracle 11g Release 2 it can be changed using the BLOCKSIZE clause. To achieve optimal performance on Violin Memory this should be set to 4k.

7.1 Installing the Oracle Software Package
As with the installation of Grid Infrastructure, a number of choices are available when installing the database software, as shown in Figure 4.

Figure 4. Oracle Installation Options

During the installation process it is not possible to create a database that uses 4k block sizes for the online redo logs, nor is it possible to set the _disk_sector_size_override parameter. It is therefore necessary to choose the Install database software only option and then build any databases post-installation. For the configuration described in this report, all of the default choices were accepted when installing the database software.

7.2 Creating a Database Using the Database Configuration Assistant
Once the software has been installed, use the Database Configuration Assistant (DBCA) to generate scripts which can then be modified to create a database:
1. Set the OS environment:
$ unset ORACLE_SID LD_LIBRARY_PATH
$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
$ export PATH=$ORACLE_HOME/bin:$PATH
$ dbca
2. In the DBCA wizard GUI, set the database file location as shown in Figure 5.

Figure 5. Selecting Database File Locations in Oracle DBCA

3. In the Creation Option screen, select Generate Database Creation Scripts as shown in Figure 6.

Figure 6. Enabling Database Creation Scripts in Oracle DBCA

Clicking the Finish button completes the process and generates the scripts in the location $ORACLE_BASE/admin/$ORACLE_SID/scripts.

7.3 Editing the DBCA Creation Scripts
After creating the scripts, edit them as follows to complete the installation:
1. Change to the scripts directory:
$ cd /u01/app/oracle/admin/vmem1/scripts
2. Edit the scripts as indicated in the following listing:
-rw-r----- 1 oracle oinstall 2091 Mar 22 14:54 clonedbcreation.sql *
-rw-r----- 1 oracle oinstall 828 Mar 22 14:54 CloneRmanRestore.sql
-rw-r----- 1 oracle oinstall 2141 Mar 22 14:54 init.ora **
-rw-r----- 1 oracle oinstall 2235 Mar 22 14:54 initvmem1tempomf.ora **
-rw-r----- 1 oracle oinstall 2177 Mar 22 14:54 initvmem1temp.ora **
-rw-r----- 1 oracle oinstall 508 Mar 22 14:54 lockaccount.sql
-rw-r----- 1 oracle oinstall 1337 Mar 22 14:54 postdbcreation.sql
-rw-r----- 1 oracle oinstall 668 Mar 22 14:54 postscripts.sql
-rw-r----- 1 oracle oinstall 1457 Mar 22 14:54 rmanrestoredatafiles.sql
-rw-r----- 1 oracle oinstall 9748480 Mar 22 14:54 tempcontrol.ctl
-rwxr-xr-x 1 oracle oinstall 519 Mar 22 14:54 vmem1.sh
-rwxr-xr-x 1 oracle oinstall 1155 Mar 22 14:54 vmem1.sql
* Add the BLOCKSIZE 4k clause to the online redo log creation commands (step 4 below).
** Add the _disk_sector_size_override parameter to the parameter files (step 3 below).
3. Edit the parameter files to add the _disk_sector_size_override=true parameter; note that the quotes prevent the shell from glob-expanding the leading *:
$ for file in *.ora; do
>   echo "*._disk_sector_size_override=true" >> "$file"
> done
4. Edit clonedbcreation.sql to change the redo log block size to 4k as shown in the example below:
Create controlfile reuse set database "vmem1"
MAXINSTANCES 8
MAXLOGHISTORY 1
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
Datafile
'&&file0',
'&&file1',
'&&file2',
'&&file3'
LOGFILE
GROUP 1 SIZE 300M BLOCKSIZE 4k,
GROUP 2 SIZE 300M BLOCKSIZE 4k,
GROUP 3 SIZE 300M BLOCKSIZE 4k,
GROUP 4 SIZE 300M BLOCKSIZE 4k,
GROUP 5 SIZE 300M BLOCKSIZE 4k,
GROUP 6 SIZE 300M BLOCKSIZE 4k,
GROUP 7 SIZE 300M BLOCKSIZE 4k,
GROUP 8 SIZE 300M BLOCKSIZE 4k
RESETLOGS;
5. Execute the scripts as shown below:
$ unset ORACLE_SID LD_LIBRARY_PATH
$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
$ export PATH=$ORACLE_HOME/bin:$PATH
$ ./vmem1.sh
6. Confirm that the redo logs were created with a 4k block size by logging into the database and selecting the group number, bytes, block size and members from v$log, as shown in the sample output in Table 1:
Table 1. Confirming redo logs
Group   Bytes       Block Size   Members
1       209715200   4096         1
2       209715200   4096         1
3       52428800    4096         1
4       52428800    4096         1
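The append loop in step 3 is worth sketching in isolation, because the quoting matters: an unquoted leading * would be glob-expanded by the shell before echo ever ran. A self-contained sketch against scratch files (the filenames and contents are illustrative, not the real DBCA output):

```shell
set -eu

# Work on scratch copies so the sketch is safe to run anywhere.
dir=$(mktemp -d)
printf 'db_block_size=8192\n' > "$dir/init.ora"
printf 'db_block_size=8192\n' > "$dir/initvmem1temp.ora"

cd "$dir"
# The single quotes stop the shell from glob-expanding the leading "*".
for f in *.ora; do
    echo '*._disk_sector_size_override=true' >> "$f"
done

# Every parameter file should now carry the override.
grep -c '_disk_sector_size_override' *.ora
```

The final grep prints a per-file count, giving a quick check that no parameter file was missed.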
About Violin Memory
Violin Memory is pioneering a new class of high-performance flash-based storage systems that are designed to bring storage performance in line with high-speed applications, servers and networks. Violin Flash Memory Arrays are specifically designed at each level of the system architecture, starting with memory and optimized through the array, to leverage the inherent capabilities of flash memory and meet the sustained high-performance requirements of business-critical applications, virtualized environments and Big Data solutions in enterprise data centers. Specifically designed for sustained performance with high reliability, Violin's Flash Memory Arrays can scale to hundreds of terabytes and millions of IOPS with low, predictable latency. Founded in 2005, Violin Memory is headquartered in Mountain View, California. For more information about Violin Memory products, visit the Violin Memory website.

© 2013 Violin Memory. All rights reserved. All other trademarks and copyrights are property of their respective owners. Information provided in this paper may be subject to change.
vmem-13q2-tr-oracle-rac-bestpractices-r1-uslet-en