What's New in 12c High Availability. Aman Sharma




What's New in 12c High Availability Aman Sharma @amansharma81 http://blog.aristadba.com

Who Am I? Aman Sharma. About 12+ years using Oracle Database. Oracle ACE. Frequent contributor to the OTN Database forum (aman.). Oracle Certified, Sun Certified. @amansharma81 * http://blog.aristadba.com * Sangam14-2014 2

Agenda Sangam14-2014 3

(Actual)Agenda Flex Cluster Flex Cluster- Server Pool Enhancements Multitenant database with 12c RAC Bundled Agents(XAG) What-If command Transaction Guard Application Continuity Flex ASM Sangam14-2014 4

Pre-12c Oracle RAC - Database Tier. Software-based clustering using the Grid Infrastructure software. Cluster nodes contain only database and ASM instances. Homogeneous configuration. Dedicated access to the shared storage for the cluster nodes. Applications/users connect via nodes outside the cluster. Reflects a point-to-point model (diagram: Database Tier) Sangam14-2014 5

Pre-12c Oracle RAC - Application Tier (diagram: Application Tier and Database Tier) Sangam14-2014 6

Pre-12.1 Cluster vs 12c Flex Cluster Sangam14-2014 7

Oracle RAC Using a Point-to-Point System. Requires a lot of resources. Each node is connected to every other node via the interconnect for node-to-node heartbeat. Each node is connected to the storage directly. Possible interconnect paths for an N-node cluster: N*(N-1)/2 interconnect paths for the node heartbeat and N connection paths for storage. For a 16-node RAC: heartbeat paths = 16*(16-1)/2 = 120, storage paths = 16 Sangam14-2014 8

Let's Talk Big! Recap: N*(N-1)/2 node heartbeat paths, N storage paths. For a 16-node RAC: 120 interconnects, 16 storage paths. What about a 500-node cluster? 124,750 heartbeat connections, 500 storage paths Sangam14-2014 9

Introducing 12c Flex Cluster (diagram: Application Tier with four Leaf Nodes, each running Oracle Clusterware; Database Tier with Hub Nodes 1-4 running Oracle Clusterware and GNS, database instances ORCL1-ORCL4, and Flex ASM instances +ASM1, +ASM2, +ASM3) Sangam14-2014 10

12c Flex Clusters - Overview. Based on a hub-and-spoke topology. Two different categories of cluster nodes: Hub Nodes - run database and ASM instances; Leaf Nodes - loosely coupled, run applications and connect to a Hub node. Flex ASM is required for a Flex Cluster; Hub nodes connect to Flex ASM based storage Sangam14-2014 11

11.2 RAC vs 12c Flex Cluster. 11.2 RAC: 16-node cluster - 120 interconnects, 16 storage paths; 500-node cluster - 124,750 interconnects, 500 storage paths. 12c Flex Cluster: 5 Hub + 16 Leaf nodes - 8 interconnects, 5 storage paths, 21 Hub-Leaf node connections; 500-node cluster (25 Hub, 475 Leaf) - 300 interconnects, 25 storage paths, 775 connection paths in total Sangam14-2014 12

Flex Cluster Benefits. Much lower resource requirements. Much larger scalability - the number of nodes can now go up to 2000. More high availability for the application tier - previously, application HA depended on the application code. Application nodes can now also use Server Pools. Better management of dependency mapping for applications Sangam14-2014 13

Say Hello to Leaf & Hub Nodes (diagram: 1 Leaf Node, 1 Hub Node). Leaf Nodes don't talk to each other (nor do they need to). Leaf node(s) choose their Hub nodes when they join the cluster. Applications running on Leaf nodes connect to the database via the Hub nodes. Far fewer internode interactions are required (hub-spoke model) Sangam14-2014 14

Leaf Nodes - A Closer Look (diagram: Leaf Node1 and Leaf Node2, each running Oracle Clusterware). Lightweight and loosely coupled; work as spokes. Each Leaf node gets connected to a Hub node and heartbeats only to that Hub node. Intended to run applications and clients. No direct access to the storage managed by Flex ASM (it is accessible only to Hub nodes) Sangam14-2014 15

Leaf Nodes - A Closer Look (contd.). Require GNS to discover the Hub nodes. No private interconnect between the leaf nodes, i.e. no inter-leaf node communication. Use the same public and private networks as the Hub nodes. If a Hub node goes down, the connected Leaf node(s) get evicted. An evicted Leaf node can be added back by restarting the Clusterware on it Sangam14-2014 16

Leaf Nodes - Resource Requirements. Much lower than for Hub nodes. Contain only the application-specific workload. Do not contain database instances, ASM instances or VIPs. Can be either virtual or physical. Contain no Voting Disk or OCR. Can be converted into Hub nodes if they have access to the storage Sangam14-2014 17

Grid Naming Service (GNS) & Flex Cluster. GNS is mandatory for enabling Flex Cluster mode. GNS runs on one of the Hub nodes. Leaf Nodes use GNS as a naming service to locate the Hub nodes. Applications and services running on Leaf nodes require GNS to locate the resources they need in order to function. Leaf nodes use GNS only when they join the cluster for the first time. As in 11.2, GNS requires a static IP (GNS VIP) Sangam14-2014 18

12c - Shared GNS Configuration. In previous versions, only one GNS per cluster was allowed; multiple clusters required multiple GNS VIPs, which means more resource requirements. In 12c, a GNS configuration can be shared among clusters. The GNS configuration needs to be exported before being shared with other clusters: $ srvctl export gns -clientdata /tmp/gnsconfig. Use the Use Shared GNS option when doing the next cluster installation (see the sketch below) Sangam14-2014 19
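
A rough sketch of the shared-GNS flow described above; the client-side registration command and the file path are illustrative, so verify them against the 12c documentation for your release.
On the cluster hosting GNS, export the client data:
$ srvctl export gns -clientdata /tmp/gnsconfig
Copy /tmp/gnsconfig to the new (client) cluster and either supply it to the Use Shared GNS option during its Grid Infrastructure installation, or register it afterwards:
$ srvctl add gns -clientdata /tmp/gnsconfig
Verify the GNS configuration on the client cluster:
$ srvctl config gns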

So What Are Hub Nodes? Essentially the same as the cluster nodes in pre-12c clusters. Have access to the ASM-managed storage. Run database instances, (Flex) ASM instances and resources for the applications. The maximum number of Hub nodes is 64 in 12.1 (HUBSIZE) Sangam14-2014 20

Enabling Flex Cluster Mode
To convert a Standard Cluster:
Check the current cluster mode: $ crsctl get cluster mode status
Check whether GNS is enabled: # srvctl status gns
If GNS is not added, add it: # srvctl add gns -vip 192.168.10.12 -domain cluster01.example.com
Set Flex Cluster mode: # crsctl set cluster mode flex
Stop & start the Clusterware on each node: # crsctl stop crs, then # crsctl start crs
Note: a Flex Cluster can't be converted back to a Standard Cluster
Sangam14-2014 21

Flex Cluster Administration - Example Commands
Show the current role of the node:
$ crsctl get node role status -node rac01
Node rac01 active role is hub
Change the node role (requires a CRS restart on that node):
$ crsctl set node role -node rac01 leaf
Check the maximum number of Hub nodes allowed (HUBSIZE):
$ crsctl get cluster hubsize
Sangam14-2014 22

Agenda Flex Cluster Flex Cluster- Server Pool Enhancements Multitenant database with 12c RAC Bundled Agents(XAG) What-If command Transaction Guard Application Continuity Flex ASM Sangam14-2014 23

Server Pools - Recap. Feature available since 11.2. Offers the traditional facility of a logical division of the cluster. Nodes are allocated to the pools. Resources are hosted on the pools; a resource can be an application, a database or a process. Policy-managed interface. Resource allocation is based on priority (IMPORTANCE) Sangam14-2014 24

Hub & Leaf Node Server Pools. Server Pools are now available for both Hub and Leaf nodes. Provide better resource management by isolating workloads. Leaf Nodes and Hub Nodes can never be in the same server pool. Server pool management for Leaf nodes is independent of the server pools containing Hub Nodes (see the example below) Sangam14-2014 25
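
For illustration, a Leaf-node-only server pool could be created by tying it to the built-in leaf category shown later in this deck (ora.leaf.category); the pool name and sizes here are hypothetical:
$ crsctl add serverpool leafapps_sp -attr "MIN_SIZE=1,MAX_SIZE=2,SERVER_CATEGORY=ora.leaf.category"
$ crsctl status serverpool leafapps_sp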

Flex Cluster Server Pool Enhancements (diagram: Leaf-node server pool hosting Apache and Siebel; Hub-node server pools OLTP_SP with MIN_SIZE=1, MAX_SIZE=3, IMP=3 and DSS_SP with MIN_SIZE=2, MAX_SIZE=2, IMP=2) Sangam14-2014 26

Flex Cluster - Policy-Based Cluster Administration. Enhances the concept of Server Pools introduced in 11.2. Previously, only server pool attributes determined node placement in server pools. From 12c Flex Clusters, two new concepts: Server Categorization - extended node attributes for servers that decide the allocation to server pools; Cluster Configuration Policy Sets - workload-based management of servers in the server pools Sangam14-2014 27

Flex Cluster - Server Categorization (example: OLTP_SP with a SERVER_CATEGORY)
Server Configuration Attributes: ACTIVE_CSS_ROLE (HUB|LEAF), CONFIGURED_CSS_ROLE (HUB|LEAF), CPU_CLOCK_RATE (MHz), CPU_COUNT, CPU_EQUIVALENCY, CPU_HYPERTHREADING, MEMORY_SIZE, NAME, RESOURCE_USE_ENABLED (1|0), SERVER_LABEL
Server Category Attributes: NAME, ACTIVE_CSS_ROLE (HUB|LEAF), EXPRESSION
EXPRESSION operators: = (equal), eqi (equal, case insensitive), > (greater than), < (less than), != (not equal), co (contains), coi (contains, case insensitive), st (starts with), en (ends with), nc (does not contain), nci (does not contain, case insensitive)
Sangam14-2014 28

Flex Cluster - Server Categorization in Action
[root@rac0 ~]# crsctl status server rac0 -f
NAME=rac0
MEMORY_SIZE=1997
CPU_COUNT=1
CPU_CLOCK_RATE=3
CPU_HYPERTHREADING=0
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=hub
RESOURCE_USE_ENABLED=1
SERVER_LABEL=
PHYSICAL_HOSTNAME=
STATE=ONLINE
ACTIVE_POOLS=Generic ora.orcl
STATE_DETAILS=AUTOSTARTING RESOURCES
ACTIVE_CSS_ROLE=hub
[root@rac0 ~]# crsctl status server rac3 -f
NAME=rac3
MEMORY_SIZE=1997
CPU_COUNT=1
CPU_CLOCK_RATE=3
CPU_HYPERTHREADING=0
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=leaf
RESOURCE_USE_ENABLED=1
SERVER_LABEL=
PHYSICAL_HOSTNAME=
STATE=ONLINE
ACTIVE_POOLS=Free
STATE_DETAILS=AUTOSTART QUEUED
ACTIVE_CSS_ROLE=leaf
Sangam14-2014 29

Flex Cluster - Listing Server Categories
[root@rac0 ~]# crsctl status category
NAME=ora.hub.category
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIVE_CSS_ROLE=hub
EXPRESSION=
NAME=ora.leaf.category
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIVE_CSS_ROLE=leaf
EXPRESSION=
[root@rac0 ~]# crsctl status server -category ora.hub.category
NAME=rac0
STATE=ONLINE
NAME=rac1
STATE=ONLINE
NAME=rac2
STATE=ONLINE
Sangam14-2014 30

Flex Cluster - Creating a Server Category
[root@rac0 ~]# crsctl add category testcat -attr "EXPRESSION='(MEMORY > 1900)'"
[root@rac0 ~]# crsctl status server -category ora.leaf.category
NAME=rac3
STATE=ONLINE
[root@rac0 ~]# crsctl status category testcat
NAME=testcat
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIVE_CSS_ROLE=hub
EXPRESSION=( MEMORY > 1900 )
Sangam14-2014 31

Flex Cluster - Cluster Policy Set. Policy-based server pool assignment. Default policy: Current. Managed by a policy set. A policy set contains two attributes: SERVER_POOL_NAMES and LAST_ACTIVATED_POLICY. A policy set may contain zero or more policies. Each policy contains definitions for server pools only Sangam14-2014 32

(diagram: a 4-node cluster with POOL1 - MIN_SIZE=2, MAX_SIZE=2, IMP=0 - hosting app1; POOL2 - MIN_SIZE=1, MAX_SIZE=1, IMP=0 - hosting app2; POOL3 - MIN_SIZE=1, MAX_SIZE=1, IMP=0 - hosting app3) Sangam14-2014 33

Varying Times & Varying Workloads. Day time: app1 uses two servers; app2 and app3 use one server each. Night time: app1 uses one server; app2 uses two servers; app3 uses one server. Weekend: app1 is not running (0 servers); app2 uses one server; app3 uses three servers. Node allocation should therefore depend on the requirements at different times Sangam14-2014 34

Flex Cluster - Proposed Cluster Policy Set
SERVER_POOL_NAMES=Free pool1 pool2 pool3
POLICY NAME=DayTime
  SERVERPOOL NAME=pool1 IMPORTANCE=0 MAX_SIZE=2 MIN_SIZE=2 SERVER_CATEGORY=
  SERVERPOOL NAME=pool2 IMPORTANCE=0 MAX_SIZE=1 MIN_SIZE=1 SERVER_CATEGORY=
  SERVERPOOL NAME=pool3 IMPORTANCE=0 MAX_SIZE=1 MIN_SIZE=1 SERVER_CATEGORY=
POLICY NAME=NightTime
  SERVERPOOL NAME=pool1 IMPORTANCE=0 MAX_SIZE=1 MIN_SIZE=1 SERVER_CATEGORY=
  SERVERPOOL NAME=pool2 IMPORTANCE=0 MAX_SIZE=2 MIN_SIZE=2 SERVER_CATEGORY=
  SERVERPOOL NAME=pool3 IMPORTANCE=0 MAX_SIZE=1 MIN_SIZE=1 SERVER_CATEGORY=
POLICY NAME=Weekend
  SERVERPOOL NAME=pool1 IMPORTANCE=0 MAX_SIZE=0 MIN_SIZE=0 SERVER_CATEGORY=
  SERVERPOOL NAME=pool2 IMPORTANCE=0 MAX_SIZE=1 MIN_SIZE=1 SERVER_CATEGORY=
  SERVERPOOL NAME=pool3 IMPORTANCE=0 MAX_SIZE=3 MIN_SIZE=3 SERVER_CATEGORY=
Sangam14-2014 35

Flex Cluster - Cluster Policy Set Creation
Modify the default policy set to manage the three server pools:
$ crsctl modify policyset -attr "SERVER_POOL_NAMES=Free pool1 pool2 pool3"
Add the required three policies:
$ crsctl add policy DayTime
$ crsctl add policy NightTime
$ crsctl add policy Weekend
Modify the server pools under each policy:
$ crsctl modify serverpool pool1 -attr "MIN_SIZE=2,MAX_SIZE=2" -policy DayTime
$ crsctl modify serverpool pool1 -attr "MIN_SIZE=1,MAX_SIZE=1" -policy NightTime
$ crsctl modify serverpool pool1 -attr "MIN_SIZE=0,MAX_SIZE=0" -policy Weekend
$ crsctl modify serverpool pool2 -attr "MIN_SIZE=1,MAX_SIZE=1" -policy DayTime
$ crsctl modify serverpool pool2 -attr "MIN_SIZE=2,MAX_SIZE=2" -policy NightTime
$ crsctl modify serverpool pool2 -attr "MIN_SIZE=1,MAX_SIZE=1" -policy Weekend
$ crsctl modify serverpool pool3 -attr "MIN_SIZE=1,MAX_SIZE=1" -policy DayTime
$ crsctl modify serverpool pool3 -attr "MIN_SIZE=1,MAX_SIZE=1" -policy NightTime
$ crsctl modify serverpool pool3 -attr "MIN_SIZE=3,MAX_SIZE=3" -policy Weekend
Sangam14-2014 36

Flex Cluster - Cluster Policy Set Activation
Activate the Weekend policy:
$ crsctl modify policyset -attr "LAST_ACTIVATED_POLICY=Weekend"
Server allocations after the policy is applied:
$ crsctl status resource -t
Name   Target   State    Server       State details
Cluster Resources
app1
  1    ONLINE   OFFLINE               STABLE
  2    ONLINE   OFFLINE               STABLE
app2
  1    ONLINE   ONLINE   mjk_has3_1   STABLE
app3
  1    ONLINE   ONLINE   mjk_has3_0   STABLE
  2    ONLINE   ONLINE   mjk_has3_2   STABLE
  3    ONLINE   ONLINE   mjk_has3_3   STABLE
Sangam14-2014 37

Agenda Flex Cluster Flex Cluster- Server Pool Enhancements Multitenant database with 12c RAC Bundled Agents(XAG) What-If command Transaction Guard Application Continuity Flex ASM Sangam14-2014 38

12c Multitenant Database & 12c RAC. A Multitenant database consists of a container database and pluggable databases (PDBs). Supported with 12c RAC. Each PDB is accessed through its own service. Each PDB service can run on one or more RAC instances. Each PDB service can be deployed over server pool(s) (see the example below) Sangam14-2014 39
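
A minimal sketch of tying a PDB to a service placed on a server pool; the database, PDB, service and pool names are hypothetical, and the option names follow the 12c srvctl syntax as I recall it, so verify against your release:
$ srvctl add service -db cdb1 -service hr_srv -pdb hrpdb -serverpool oltp_sp
$ srvctl start service -db cdb1 -service hr_srv
$ srvctl status service -db cdb1 -service hr_srv
Clients then reach the PDB by connecting to the hr_srv service.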

Agenda Flex Cluster Flex Cluster- Server Pool Enhancements Multitenant database with 12c RAC Bundled Agents(XAG) What-If command Transaction Guard Application Continuity Flex ASM Sangam14-2014 40

Flex Cluster - Bundled Agents (XAG) (diagram: the same Flex Cluster topology as before, with XAG agents running on the Leaf Nodes in the Application Tier; Hub Nodes 1-4 in the Database Tier run Oracle Clusterware, GNS, ORCL1-ORCL4 and the Flex ASM instances) Sangam14-2014 41

Flex Cluster - Bundled Agents (XAG) Introduction. Oracle Clusterware can be used to provide HA to applications. HA for applications was available earlier through application APIs and services. With 11.2.0.3, agents were available as standalone downloads (http://oracle.com/goto/clusterware). 12.1 introduced Bundled Agents (XAG), supplied with the GI software itself. In 12c, XAG agents can reside on both Leaf and Hub nodes. http://www.oracle.com/technetwork/database/database-technologies/clusterware/downloads/ogiba-2189738.pdf Sangam14-2014 42

GI & Bundled Agents. GI provides a pre-configured public core network resource, ora.net1.network. Applications bind application VIPs (appvip) to this network layer. AGCTL is the interface to add an application resource to the GI, managed by the bundled agents. Shared storage access: ACFS/NFS/DBFS. Applications for which XAG agents are available: Apache HTTP & Tomcat, GoldenGate, Siebel, JD Edwards, PeopleSoft, MySQL (see the sketch below) Sangam14-2014 43
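
To make the pieces above concrete: an application VIP is created on the pre-configured ora.net1.network, and the application is then registered and controlled through agctl. The IP address and names are placeholders, and the product-specific agctl options are omitted, so treat this as a sketch rather than exact syntax.
Create an application VIP on network 1 (ora.net1.network):
# appvipcfg create -network=1 -ip=192.168.10.50 -vipname=myapp-vip -user=root
Register and manage the application through the bundled agent; the general agctl pattern is:
# agctl add <product> <instance_name> [product-specific options]
# agctl start <product> <instance_name>
# agctl status <product>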

Agenda Flex Cluster Flex Cluster- Server Pool Enhancements Multitenant database with 12c RAC Bundled Agents(XAG) What-If command Transaction Guard Application Continuity Flex ASM Sangam14-2014 44

12c Cluster - What-If Command. From 12c, DBAs can predict the impact of an operation before running it. Can be used with both CRSCTL and SRVCTL commands. Available for the following categories of operations: Resources - start, stop, relocate, add, modify; Server pools - add, remove, modify; Servers - add, remove, relocate; Policy - change active policy; Server category - modify Sangam14-2014 45

12c Cluster - What-If Command
[root@rac0 ~]# crsctl eval stop res ora.rac0.vip -f
Stage Group 1:
Stage Number  Required  Action
1             Y         Resource 'ora.listener.lsnr' (rac0) will be in state [OFFLINE]
2             Y         Resource 'ora.rac0.vip' (1/1) will be in state [OFFLINE]
[root@rac0 ~]# crsctl eval start res ora.rac0.vip -f
Stage Group 1:
Stage Number  Required  Action
1             N         Error code [223] for entity [ora.rac0.vip]. Message is [CRS-5702: Resource 'ora.rac0.vip' is already running on 'rac0'].
Sangam14-2014 46

[root@rac0 ~]# crsctl eval delete server rac0 -f
Stage Group 1:
Stage Number  Required  Action
1             Y         Resource 'ora.asmnet1lsnr_asm.lsnr' (rac0) will be in state [OFFLINE]
              Y         Resource 'ora.data.dg' (rac0) will be in state [OFFLINE]
              Y         Resource 'ora.listener.lsnr' (rac0) will be in state [OFFLINE]
              Y         Resource 'ora.listener_scan1.lsnr' (1/1) will be in state [OFFLINE]
              Y         Resource 'ora.asm' (1/1) will be in state [OFFLINE]
              Y         Resource 'ora.gns' (1/1) will be in state [OFFLINE]
              Y         Resource 'ora.gns.vip' (1/1) will be in state [OFFLINE]
              Y         Resource 'ora.net1.network' (rac0) will be in state [OFFLINE]
              Y         Resource 'ora.ons' (rac0) will be in state [OFFLINE]
              Y         Resource 'ora.orcl.db' (1/1) will be in state [OFFLINE]
              Y         Resource 'ora.proxy_advm' (rac0) will be in state [OFFLINE]
              Y         Resource 'ora.rac0.vip' (1/1) will be in state [OFFLINE]
              Y         Resource 'ora.scan1.vip' (1/1) will be in state [OFFLINE]
              Y         Server 'rac0' will be removed from pools [Generic ora.orcl]
2             Y         Resource 'ora.gns.vip' (1/1) will be in state [ONLINE] on server [rac1]
              Y         Resource 'ora.rac0.vip' (1/1) will be in state [ONLINE|INTERMEDIATE] on server [rac1]
<<output abridged>>
Sangam14-2014 47
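
The examples above cover resources and servers; predicting a policy change (the "Policy: Change active policy" category) can be sketched as below, assuming the DayTime policy from the earlier server pool example. The command prints the projected server pool re-allocations without actually applying them:
# crsctl eval activate policy DayTime -f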

Agenda Flex Cluster Flex Cluster- Server Pool Enhancements Multitenant database with 12c RAC Bundled Agents(XAG) What-If command Transaction Guard Application Continuity Flex ASM Cloud File System Sangam14-2014 48

Transaction Issues Before 12c. An outage at the database or application level can cause loss of in-flight work and in-doubt transactions. A user's reattempt of the transaction may lead to logical errors, i.e. duplication of data. Handling such exceptions at the application level is not easy. (diagram: numbered application-to-database round trips with error points) Sangam14-2014 49

Solution: Transaction Guard & Application Continuity. Transaction Guard: provides a generic protocol and API that applications use for at-most-once execution in case of planned and unplanned outages and repeated submissions. Application Continuity: enables the replay of in-flight, recoverable transactions following a database outage Sangam14-2014 50

What Is Transaction Guard? Part of both Standard and Flex clusters. Returns the outcome of the last transaction after a recoverable error, using the Logical Transaction ID (LTXID). Used by Application Continuity (automatically enabled). Can also be used independently Sangam14-2014 51

What Is Transaction Guard? Database Request: a unit of work submitted by SQL, PL/SQL etc. Recoverable Error: an error due to an issue independent of the application, i.e. network, node, database or storage errors. Reliable Commit Outcome: the outcome of the last transaction (preserved by Transaction Guard using the LTXID). Session State Consistency: describes how the application changes non-transactional state during a database request. Mutable Functions: functions that change their state with every execution Sangam14-2014 52

What Is a Logical TX ID (LTXID)? LTXID = Logical Transaction ID. Used to fetch the commit outcome of the last transaction via DBMS_APP_CONT.GET_LTXID_OUTCOME. The client is supplied a unique LTXID at each authentication and for each round trip of the client driver that performs commit operations. Both the client and the database hold the LTXID. Transaction Guard ensures that each LTXID is unique. The LTXID is retained after the commit for the default retention period of 24 hours. While the outcome is being obtained, the LTXID is blocked to ensure its integrity Sangam14-2014 53

Transaction Guard - Pseudo Workflow
Receive a FAN down event (or a recoverable error); FAN aborts the dead session.
If it is a recoverable error (new OCI_ATTRIBUTE for OCI, isRecoverable for JDBC):
  Get the last LTXID from the dead session using getLTXID or from your callback.
  Obtain a new session.
  Call GET_LTXID_OUTCOME with the last LTXID to obtain the COMMITTED and USER_CALL_COMPLETED status.
If COMMITTED and USER_CALL_COMPLETED, then return the result.
Else if COMMITTED and not USER_CALL_COMPLETED, then return the result with a warning (details such as out binds or row counts were not returned).
Else if not COMMITTED, clean up and resubmit the request, or return the uncommitted result to the client.
Sangam14-2014 54
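
On the database side, the "Call GET_LTXID_OUTCOME" step of this workflow can be sketched in PL/SQL as follows; the bind variable :ltxid stands for the RAW LTXID that the client driver retrieved from the dead session, and error handling is omitted:
DECLARE
  l_committed           BOOLEAN;
  l_user_call_completed BOOLEAN;
BEGIN
  -- Ask Transaction Guard for the outcome of the failed session's last transaction
  DBMS_APP_CONT.GET_LTXID_OUTCOME(
    client_ltxid        => :ltxid,
    committed           => l_committed,
    user_call_completed => l_user_call_completed);
  IF l_committed AND l_user_call_completed THEN
    NULL;  -- result is complete: return it to the user
  ELSIF l_committed THEN
    NULL;  -- committed, but out binds/row counts were lost: return with a warning
  ELSE
    NULL;  -- not committed: safe to clean up and resubmit the request
  END IF;
END;
/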

Transaction Guard - (Un)Supported Transactions. Supported: local transactions, parallel transactions, distributed & remote transactions, DDL & DCL transactions, auto-commit and commit-on-success, PL/SQL with embedded COMMIT. Unsupported: recursive transactions, autonomous transactions, Active Data Guard with read/write DB links for forwarding transactions, GoldenGate & Logical Standby. API support: 12c JDBC Type 4 driver, 12c OCI/OCCI client drivers, 12c ODP.NET Sangam14-2014 55

Configuring the Database for Transaction Guard. Database release 12.1.0.1 or later. GRANT EXECUTE ON DBMS_APP_CONT TO <user>; Configure Fast Application Notification (FAN). Locate and define the transaction history table (LTXID_TRANS). Configure the following parameters for the service: COMMIT_OUTCOME=TRUE, FAILOVER_TYPE=TRANSACTION, RETENTION_TIMEOUT=<value> Sangam14-2014 56

Sample Service Configuration for Transaction Guard
Adding an admin-managed service:
$ srvctl add service -database orcl -service GOLD -prefer inst1 -available inst2 -commit_outcome TRUE -retention 604800
Modifying a single-instance service:
DECLARE
  params dbms_service.svc_parameter_array;
BEGIN
  params('commit_outcome') := 'true';
  params('retention_timeout') := 604800;
  dbms_service.modify_service('<service-name>', params);
END;
/
Sangam14-2014 57

Agenda Flex Cluster Flex Cluster- Server Pool Enhancements Multitenant database with 12c RAC Bundled Agents(XAG) What-If command Transaction Guard Application Continuity Flex ASM Cloud File System Sangam14-2014 58

What Is Application Continuity? Masks outages from applications. Replays the in-flight transactions. Uses Transaction Guard implicitly (a sample service configuration is sketched below) Sangam14-2014 59
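
A sketch of a service configured for Application Continuity, extending the Transaction Guard service attributes shown earlier; the service and instance names are hypothetical and the option names should be verified against your 12c release:
$ srvctl add service -db orcl -service AC_GOLD -preferred orcl1 -available orcl2 -failovertype TRANSACTION -commit_outcome TRUE -replay_init_time 300 -retention 86400 -notification TRUE -failoverretry 30 -failoverdelay 10
$ srvctl config service -db orcl -service AC_GOLD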

Application Continuity - Workflow (image courtesy: Oracle documentation) Sangam14-2014 60

Application Continuity - Resource Requirements. For the Java client: increased memory for the replay queues and additional CPU for garbage collection. For the database server: additional CPU for validation. Transaction Guard: bundled with the kernel, minimal overhead Sangam14-2014 61

Disabling Application Continuity. Use the disableReplay() API. Check for UTL_FILE, UTL_MAIL, UTL_FILE_TRANSFER, UTL_HTTP, UTL_TCP, UTL_SMTP, DBMS_ALERT. Disable the replay when the application: assumes that a location value doesn't change; assumes that a rowid value doesn't change; uses autonomous transactions or external PL/SQL Sangam14-2014 62

Agenda Flex Cluster Flex Cluster- Server Pool Enhancements Multitenant database with 12c RAC Bundled Agents(XAG) What-If command Transaction Guard Application Continuity Flex ASM Cloud File System Sangam14-2014 63

ASM of Past Times. ASM instances run locally on each node. ASM clients can access ASM only from the local node. Loss of the local ASM instance makes the clients connected to it unavailable. (Image courtesy: Oracle documentation) Sangam14-2014 64

12c's Flex ASM. A 1:1 mapping of ASM instances to clients is not required. Number of ASM instances = cardinality (default 3). Uses a dedicated network called the ASM network, used exclusively for communication between ASM instances and their clients. If the local ASM instance fails, clients fail over to another Hub node running an ASM instance. Mandatory for a 12c Flex Cluster. (Image courtesy: Oracle documentation) Sangam14-2014 65

Dedicated ASM Network in 12c Flex ASM (diagram: Hub Nodes 1-3 running Oracle Clusterware and GNS, database instances ORCL1-ORCL3 and ASM instances +ASM1, +ASM2; networks shown: Public Network, ASM Network, CSS Network, Storage Network to the ASM Storage) Sangam14-2014 66

Dedicated ASM Network in 12c Flex ASM Sangam14-2014 67

Flex ASM - Failover Sangam14-2014 68

Administering Flex ASM
Flex ASM can be managed using ASMCA, CRSCTL, SQL*Plus and SRVCTL.
$ asmcmd showclustermode
ASM cluster : Flex mode enabled
$ srvctl status asm -detail
ASM is running on mynoden02,mynoden01
ASM is enabled.
$ srvctl config asm
ASM instance count: 3
SQL> SELECT instance_name, db_name, status FROM V$ASM_CLIENT;
INSTANCE_NAME   DB_NAME   STATUS
+ASM1           +ASM      CONNECTED
orcl1           orcl      CONNECTED
orcl2           orcl      CONNECTED
Sangam14-2014 69

12c ASM - Mixed Mode Configuration. Pure 12c mode: cardinality != number of nodes; supports DB instance failover to other ASM instances; any DB instance can connect to any ASM instance; managed by cardinality. Mixed mode: Flex ASM with cardinality = number of nodes; an ASM instance on all the nodes; allows 12c DB instances to connect to remote ASM instances; pre-12c DB instances can connect to the local ASM instance. Standard mode: standard ASM installation and configuration; can be converted to Flex ASM mode using ASMCA or converttoflexasm.sh (see the cardinality sketch below) Sangam14-2014 70
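
The cardinality mentioned above is adjusted through srvctl; as a sketch (confirm the option names against your 12c release), running ASM on every node gives the mixed-mode layout, while a numeric count gives the pure Flex ASM layout:
$ srvctl modify asm -count ALL
$ srvctl modify asm -count 3
$ srvctl status asm -detail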

Agenda Flex Cluster Flex Cluster- Server Pool Enhancements Multitenant database with 12c RAC Bundled Agents(XAG) What-If command Transaction Guard Application Continuity Flex ASM Cloud File System Sangam14-2014 71

12c Cloud File System (Cloud FS). Next-generation file system. The 12c Cloud File System integrates the ASM Cluster File System (ACFS) and the ASM Dynamic Volume Manager (ADVM). Using Cloud FS, applications, databases and storage can be consolidated in private clouds (see the sketch below) Sangam14-2014 72
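
To make the ACFS/ADVM layering concrete, a Cloud FS file system is typically built by carving an ADVM volume out of an ASM disk group and then formatting and mounting it with ACFS. The disk group, volume name, size, device suffix and mount point below are placeholders:
$ asmcmd volcreate -G DATA -s 10G appvol        (create the ADVM volume in the DATA disk group)
$ asmcmd volinfo -G DATA appvol                 (shows the volume device, e.g. /dev/asm/appvol-123)
# mkfs -t acfs /dev/asm/appvol-123              (format the ADVM volume with ACFS, as root)
# mkdir -p /acfs/app
# mount -t acfs /dev/asm/appvol-123 /acfs/app   (mount the ACFS file system)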

Overview of Cloud FS in 12c (image courtesy: Google Images) Sangam14-2014 73

Cloud FS - Advanced Data Services. Support for all types of files. Enhanced snapshots (snap-of-snap). Auditing. Encryption. Tagging Sangam14-2014 74

Take Away. 12c has revolutionized the HA stack, yet again. Flex Cluster and Flex ASM are new paradigms. Multitenancy is the solution for database consolidation. Using Flex Cluster along with Multitenancy gives you a much better foundation for creating a private cloud. Cloud FS is the foundation for the next-generation storage solution for Oracle clusters Sangam14-2014 75

Thank You! @amansharma81 http://blog.aristadba.com amansharma@aristadba.com Sangam14-2014 76