Optimizing Oracle BI EE Performance Mark Rittman, Director, Rittman Mead Consulting UKOUG Conference & Exhibition 2008
Who Am I? Oracle BI&DW Architecture and Development Specialist Co-Founder of Rittman Mead Consulting Oracle BI&DW Project Delivery Specialists 10+ years with Discoverer, OWB etc Oracle ACE Director, ACE of the Year 2005 Writer for OTN and Oracle Magazine Longest-running Oracle blog http://www.rittmanmead.com/blog Ex-Chair of UKOUG BIRT SIG Co-Chair of ODTUG BI&DW SIG Speaker at IOUG and BIWA events
Rittman Mead Consulting Oracle BI&DW Project Specialists providing consulting, training and support Clients in the UK, USA, Europe, Middle-East Voted UKOUG BI Partner of the Year 2008 Exhibitors at UKOUG 2008, Stand 90 Come and see us to discuss your requirements in more detail
Oracle Business Intelligence Enterprise Edition
OBIEE 10gR3 Architecture [Architecture diagram: a web browser and external applications/portals connect over HTTP/HTTPS through a web server (IIS, Tomcat, WebSphere, iPlanet) and the SAW Bridge to Oracle BI Presentation Services (Interactive Dashboards, Answers, Delivers, BI Publisher, Web Catalog, cache and session management); Presentation Services issue logical SQL over ODBC to the Oracle BI Server (request generation, navigator, aggregate navigator, optimized query rewrites, execution engine, cache services, security services, cluster controller), which reaches analytical and operational data sources through data source adapters (ODBC, CLI, OCI, XML, MDX).]
OBIEE 10.1.3.x Diagnostic Tools Query Log produced by the BI Server Server Log produced by the BI Server Server Log produced by the BI Presentation Server Usage Tracking files and database entries, plus associated RPD and reports Oracle Grid Control BI Management Pack DAC Console Execution View Oracle Grid Control / DB Control / AS Control diagnostics and monitoring pages Equivalent monitoring pages for other DB / App Servers
Typical Performance Optimization Scenarios 1. "The system seems slow compared to normal" 2. "This report is running slow" 3. The system isn't working 4. You wish to proactively spot performance problems
How Do Other People Address Performance Issues? Performance in the Oracle Database has long been an area of study In early days, checklists and "guess and grimace" was the usual approach "Let's add some more memory" "Let's turn on caching" "I think we should add some materialized views" "You need a bigger server" Nowadays, a more scientific and methodical approach is used Have a methodology Use diagnostics where available, add them where not Use response-time tuning Identify the real culprit and focus tuning time on that So it would be good if we could follow some of these thoughts, and at least base tuning approaches on a methodology and diagnostic data
1) "The system seems slow compared to normal" Someone comes to you, and says that OBIEE seems to be running slow. How would you approach this in the database world? Look at OS diagnostics (taskmgr.exe, top, etc) Look at Grid Control / DB Control Run Statspack report Check ASH/AWR Use diagnostic tools to get a feel for the load on the system Most of these run continuously in the background
Diagnosing System Slowdown on OBIEE Systems There are several equivalents to ASH, AWR, Statspack, Grid Control server reports when working with OBIEE The first of these is the Grid Control BI Management Pack Supports Oracle BI EE 10.1.3+ Supports Oracle BI Applications 7.9+ Can manage the infrastructure for the following OBIEE components Oracle BI Server Oracle BI Cluster Controller Oracle BI Presentation Server Oracle BI Scheduler Oracle BI Suite Enterprise Edition Oracle BI DAC Server Hosts running Oracle BI components
OEM Interface with OBIEE and BI Apps Architecture
Oracle BI Server Performance Charts General Performance CPU Usage% Memory Usage (KB) Execute Requests Fetch Requests Total Sessions etc Cache Performance Data Cache Hit Ratio% Data Cache Hit vs. Misses Generic Cache Requests Generic Cache Avg. Hits/sec Generic Cache Util. Ratio% Record and compare configurations Create service tests and define service-level monitoring
Oracle BI Presentation Server Performance Charts General Performance CPU Usage% Memory Usage (KB) Sessions Chart Engine Query Thread Pool Chart Thread Pool
Oracle BI DAC Server Performance Charts Historical ETL Performance Total Tasks Completed Tasks Running Tasks Failed Tasks Queued Tasks Runnable Tasks ETL Runs Views a summary of completed runs, with information about the duration, total steps, completed steps, stopped/failed steps and running steps
Usage Tracking Provides a means to log query performance statistics to either a database or a file Database recommended; easier to analyze and less impact on the server 10.1.3.4+ ships with a number of predefined reports for usage tracking Top 10 queries Analysis per day Long-running queries Performance by user Query count per user Query details per user etc Can be combined into a dashboard Can be used as triggers for ibots Alert when query performance falls below a threshold Diagnose problems before they are reported by users
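A "long-running queries" usage tracking report is essentially a sort of the tracking table by elapsed time. The sketch below illustrates the idea against an in-memory SQLite stand-in: the real OBIEE table is S_NQ_ACCT in the usage tracking schema, and the column names here are simplified assumptions, not the exact OBIEE layout.

```python
import sqlite3

# In-memory stand-in for the usage tracking table (real table: S_NQ_ACCT;
# column names here are simplified assumptions, not the exact OBIEE schema)
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE s_nq_acct (
    user_name TEXT, query_text TEXT, total_time_sec INTEGER)""")
conn.executemany(
    "INSERT INTO s_nq_acct VALUES (?, ?, ?)",
    [("scott", "SELECT ... sales by region", 194),
     ("scott", "SELECT ... sales by product", 12),
     ("fred",  "SELECT ... sales by channel", 75)])

# "Long-running queries" style report: slowest requests first
rows = conn.execute("""
    SELECT user_name, query_text, total_time_sec
    FROM s_nq_acct
    ORDER BY total_time_sec DESC
    LIMIT 10""").fetchall()
for user, sql, secs in rows:
    print(f"{secs:>4}s  {user:<6} {sql}")
```

The same ORDER BY/LIMIT pattern, pointed at the real usage tracking schema, is what the predefined "Long-running queries" report boils down to.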
2) "This report is running slow" A user points you to a report that is taking longer than normal to run How would you approach this (tune a transaction) in the database world? Focus on the individual session, transaction Run a trace Review ASH reports Examine wait events Focus on the parts of the transaction most contributing to response time exec dbms_session.set_identifier('sql_test') select sum(sales) from customers, sales s where...
Diagnosing The Reasons for A Report Running Slow in OBIEE Need to focus on the individual session (request) rather than server-wide statistics Two places to look for details of individual request executions 1. Usage Tracking tables 2. NQSQuery.log log file Usage tracking always runs, but only gives half the picture, plus some history Logging doesn't always run (needs to be enabled) but, when it is, tells us everything, though just for that report
The NQSQuery.log Query Log Logging by default is turned off for users Can be turned on, and set to a level from 1 to 5 Should really only be enabled for diagnostics, as it can slow down queries Five levels of logging are available:
Level 1: Logical SQL, elapsed time, cache used, query status
Level 2: All of 1 plus subject area name, physical SQL, # rows
Level 3: All of 2 plus cache usage details
Level 4: All of 3 plus logical SQL execution plan
Level 5: All of 4 plus intermediate row counts
Interpreting a Level 3 Log File : Part 1 ############################################# -------------------- SQL Request: Query String SET VARIABLE QUERY_SRC_CD='Report';SELECT PRODUCTS_LOCAL.PROD_SUBCATEGORY_DESC saw_0, SALES_BIG_LOCAL.QUANTITY_SOLD saw_1 FROM "Log & Federated Query Test" ORDER BY saw_0 +++Administrator:2c0000:2c0003:----2008/10/02 13:29:24 -------------------- General Query Info: General query Information Repository: Star, Subject Area: Log & Federated Query Test, Presentation: Log & Federated Query Test +++Administrator:2c0000:2c0003:----2008/10/02 13:29:24 -------------------- Logical Request (before navigation): Logical request (before Navigation) RqList PRODUCTS_LOCAL.PROD_SUBCATEGORY_DESC as c1 GB, QUANTITY_SOLD:[DAggr(SALES_BIG_LOCAL.QUANTITY_SOLD by [ PRODUCTS_LOCAL.PROD_ SUBCATEGORY_DESC] )] as c2 GB OrderBy: c1 asc +++Administrator:2c0000:2c0003:----2008/10/02 13:29:24
Interpreting a Level 3 Log File : Part 2 -------------------- Sending query to database named ora11g (id: <<11172>>): Physical SQL Query select T6303.PROD_SUBCATEGORY_DESC as c1, sum(t6826.quantity_sold) as c2 from SH_COPY.PRODUCTS T6303, SH_COPY.SALES_BIG T6826 where ( T6303.PROD_ID = T6826.PROD_ID ) group by T6303.PROD_SUBCATEGORY_DESC order by c1
Interpreting a Level 3 Log File : Part 2 Query outcome status +++Administrator:2c0000:2c0003:----2008/10/02 13:32:39 -------------------- Query Status: Successful Completion +++Administrator:2c0000:2c0003:----2008/10/02 13:32:39 -------------------- Rows 21, bytes 84504 retrieved from database query id: <<11172>> Physical query response time +++Administrator:2c0000:2c0003:----2008/10/02 13:32:39 -------------------- Physical query response time 194 (seconds), id <<11172>> +++Administrator:2c0000:2c0003:----2008/10/02 13:32:39 -------------------- Physical Query Summary Stats: Number of physical queries 1, Cumulative time 194, DB-connect time 0 (seconds) Rows returned to client +++Administrator:2c0000:2c0003:----2008/10/02 13:32:39 -------------------- Rows returned to Client 21 +++Administrator:2c0000:2c0003:----2008/10/02 13:32:39 -------------------- Logical Query Summary Stats: Elapsed time 195, Response time 194, Compilation time 0 (seconds)
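The timing lines in these log entries follow a regular format, so they lend themselves to scripted extraction when you have many requests to compare. A minimal sketch, assuming the log fragment format shown on this slide:

```python
import re

# Fragment in the format of the Level 3 NQSQuery.log entries shown above
log = """
-------------------- Physical query response time 194 (seconds), id <<11172>>
-------------------- Rows returned to Client 21
-------------------- Logical Query Summary Stats: Elapsed time 195, Response time 194, Compilation time 0 (seconds)
"""

# Pull out the physical response time and the logical summary stats
phys = re.search(r"Physical query response time (\d+) \(seconds\)", log)
summ = re.search(r"Elapsed time (\d+), Response time (\d+), Compilation time (\d+)", log)

physical_secs = int(phys.group(1))
elapsed, response, compilation = (int(g) for g in summ.groups())
print(f"physical={physical_secs}s elapsed={elapsed}s compile={compilation}s")
```

Comparing physical time against logical elapsed time this way quickly tells you whether the time went into the database query or into BI Server processing.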
Additional Log Information at Level 5 : Logical Execution Plan Logical Execution Plan RqList <<9790>> [for database 3023:2350:ora11g,44] PRODUCTS.PROD_SUBCATEGORY_DESC as c1 GB [for database 3023:2350,44], sum(sales_big.quantity_sold by [ PRODUCTS.PROD_SUBCATEGORY_DESC] ) as c2 GB [for database 3023:2350,44] Child Nodes (RqJoinSpec): <<9851>> [for database 3023:2350:ora11g,44] PRODUCTS T6303 SALES_BIG T6826 DetailFilter: PRODUCTS.PROD_ID = SALES_BIG.PROD_ID and (PRODUCTS.PROD_SUBCATEGORY_DESC = 'Camcorders' or PRODUCTS.PROD_SUBCATEGORY_DESC = 'Camera Batteries' or PRODUCTS.PROD_SUBCATEGORY_DESC = 'Camera Media') [for database 0:0] GroupBy: [ PRODUCTS.PROD_SUBCATEGORY_DESC] [for database 3023:2350,44] OrderBy: c1 asc [for database 3023:2350,44] +++Administrator:2b0000:2b0001:----2008/10/02 14:04:10
Linking OBIEE Queries to Database Transactions The NQSQuery.log file can give you the physical SQL for a query Using this, you can generate a DB execution plan to check index usage etc If you monitor the database at the same time, you can potentially measure load on the DB At present this is a manual process, and difficult to do In future, Grid Control could link OBIEE requests to DB transactions? Balance of DB independence with end-to-end tuning capability
One for the Future? OBIEE Request Advisor? It should be possible to run a wizard from BI Administrator or the BI Management Pack to take a set of requests and optimize them Suggest summaries at BI Server or database level Recommend indexes and caching Will probably need something at the BI Server level, the equivalent of ADDM/Advisors
3) "The system isn't working" Someone tells you OBIEE is down - what do you do? In the database / application server world, you would typically have alerts, plus scripts or steps you can perform to check status and diagnose the problem: alert logs, Grid Control or O/S-specific commands Quickly establish which component is down, and what caused the downtime
Server Downtime Diagnosis using OBIEE The BI Server and Presentation Server logs provide a record of exits, crashes Needs access to the server though Grid Control BI Management Pack provides graphical view of server status Status history recorded in Management Server Repository Alerts can be sent if server down Restart can be scripted
BI Management Pack Service Level and Configuration Reports BI Management Pack can be configured to test service levels Configuration of the BI Server, Presentation Server etc can be recorded and compared with previous configurations Service-level test can trigger action if dashboard unavailable Configurations can be checked to see if something is stopping the BI Server running
4) Proactively Spotting Performance Problems Several sources exist for spotting performance drops before the users do Usage tracking tables, reports Time-series query, compare key report runtimes against historical runtimes Trigger an ibot if performance drops more than 25% week-on-week Grid Control Management Pack can set thresholds, generate alerts based on server load, service-level response time etc
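The week-on-week check above can be expressed as a simple comparison of each report's current average runtime against its prior-week baseline. A sketch, with made-up runtime figures standing in for real usage tracking data:

```python
# Sketch of the week-on-week check described above: compare this week's
# average runtime per report with last week's, and flag regressions >25%.
# Report names and runtimes are illustrative, not real usage data.
last_week = {"Sales Dashboard": 8.0, "Stock Report": 20.0}
this_week = {"Sales Dashboard": 8.5, "Stock Report": 31.0}

def regressions(prev, curr, threshold=0.25):
    """Return reports whose runtime grew by more than `threshold`."""
    flagged = {}
    for report, secs in curr.items():
        baseline = prev.get(report)
        if baseline and (secs - baseline) / baseline > threshold:
            flagged[report] = (baseline, secs)
    return flagged

print(regressions(last_week, this_week))
```

In practice the two dictionaries would come from time-bucketed queries against the usage tracking tables, and a flagged report would fire the ibot alert.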
So Now I Know Where The Problem Is, What Can I Do About It? So now you've established the cause of the report or system slowdown What does OBIEE (and the database) provide to address this? Key approach should be to address performance issues as close to the data as is possible, moving processing down the stack to the data [Architecture diagram repeated from earlier: Presentation Services sit above the Oracle BI Server, which sits above the data sources; the aim is to push work down this stack toward the data sources.]
Overall OBIEE Performance Tuning Goal To perform aggregations, filtering, functions as close to the data as possible Use an ETL process and a data warehouse to pre-integrate data Joins are less expensive, avoids the need for in-memory cross-database joins Create and manage summaries in the database Can benefit from advanced features such as incremental refresh, query rewrite Make your physical database schema match your logical model as closely as possible Create a conformed, denormalized, dimensional data warehouse Consider using an OLAP server to handle aggregates, cross-dimension queries, advanced calculations, time-series calculations The more you can offload to the underlying database, the better But where you can't, there are a number of OBIEE features that can compensate...
Oracle BI Server Performance and Integration Features Driving Tables and Parameterized Nested Loop Joins Summary Management, and the Aggregate Persistence Wizard Ability to Leverage Oracle Essbase, Oracle OLAP, MS AS Caching and Cache Management
Federated Queries If data for a query needs to come from >1 source, the BI Server can perform cross-database joins, or federated queries Data is sourced from each individual database, and then stitched together to create a single results set Powerful feature, but use with care All tables are normally loaded into memory, and then joined Source tables should therefore normally be relatively small Usually much more expensive than in-database joins The performance of certain federated queries can be improved through the use of Driving Tables and Parameterized Nested Loop Joins (PNLJ)
Driving Table Choice and Parameterized Nested Loop Joins By default, federated queries will cause both tables to be loaded into the BI Server memory, and then joined Can be expensive if one or more of the tables are large An alternative is to designate one of the tables as the driving table Needs to be small, typically <1000 rows This table is queried and filtered first, and its remaining rows are then applied as a filter to the second table Driving table is defined in the Logical Join dialog
Parameterized Nested Loop Join Algorithm 1. Start reading rows from the driving table 2. Submit a parameterized query request to the non-driving table 3. Bind values from the driving-table rows into the parameterized query and fetch results 4. Repeat until all rows from the driving table are processed SELECT Times.Quarter, Branches."Branch Name", Units."Units Sold" FROM "Sales" ORDER BY saw_0, saw_1, saw_2 select T34334."Branch Name" as c1, T34334."Branch ID" as c2 from "Branch_Sheet$" T34334 select T34531.QUARTER_NAME as c2, T34533.BRANCH_ID as c5, sum(T34533.UNITS_SOLD) as c6 from TIMES T34531, UNITS T34533 where (T34531.TIME_ID = T34533.TIME_ID and T34533.BRANCH_ID in (:PARAM1, :PARAM2, :PARAM3, :PARAM4, :PARAM5, :PARAM6, :PARAM7, :PARAM8, :PARAM9, :PARAM10)) group by T34533.BRANCH_ID, T34531.QUARTER_NAME order by c5
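The four steps above can be sketched in miniature: read the driving table in batches, bind each batch of key values into a probe of the non-driving table, and repeat. This is an in-memory illustration only; the real work happens in the BI Server and the source databases, and the table names and rows here are invented.

```python
# In-memory sketch of the parameterized nested loop join steps above.
# Two lists of dicts stand in for the driving and non-driving tables.
branches = [  # driving table: small, queried and filtered first
    {"branch_id": 1, "branch_name": "London"},
    {"branch_id": 2, "branch_name": "Leeds"},
    {"branch_id": 3, "branch_name": "York"},
]
units = [  # non-driving table: probed with batches of bound branch_ids
    {"branch_id": 1, "quarter": "Q1", "units_sold": 100},
    {"branch_id": 2, "quarter": "Q1", "units_sold": 40},
    {"branch_id": 1, "quarter": "Q2", "units_sold": 120},
]

def pnl_join(driving, non_driving, key, batch_size=2):
    """Join by binding batches of driving-table keys into probes of the
    non-driving table; batch_size plays the role of the
    MAX_PARAMETERS_PER_DRIVE_JOIN setting."""
    results = []
    for start in range(0, len(driving), batch_size):
        batch = driving[start:start + batch_size]        # 1. read driving rows
        params = {row[key] for row in batch}             # 3. bind key values
        matches = [r for r in non_driving if r[key] in params]  # 2. probe
        for d in batch:
            for m in matches:
                if m[key] == d[key]:
                    results.append({**d, **m})
    return results                                       # 4. repeat until done

joined = pnl_join(branches, units, "branch_id")
```

Each batch turns into one probe of the non-driving source instead of pulling its whole table into memory, which is the point of the technique when the driving table is small.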
Driving Table and PNLJ Limitations Only one join in a query can be a drive-table join Optimizer treats as a hint, not a directive Join expression must be compatible Non-drive table SQL must support parameters Controlled by two NQSCONFIG.INI parameters MAX_PARAMETERS_PER_DRIVE_JOIN MAX_QUERIES_PER_DRIVE_JOIN
Adding Summary Data to the Logical Model Summary tables can be introduced to the Business Model and Mapping Layer Pre-summarized data that is then used when queries request aggregates Useful if it s not possible to use materialized views or their equivalent Useful as a way of aggregating logical tables sourced from >1 physical database
The Aggregate Persistence Wizard Automates the process of creating and mapping summary tables Pick one or more measures and dimension levels to aggregate on Generates internal scripting code; execute the script using nqcmd.exe Generates RDBMS tables, inserts summary data, maps to the logical model Aggregates can be stored in a different RDBMS than the source data
Leveraging Essbase and Other OLAP Servers Data used for OBIEE reporting can also be sourced from OLAP servers Essbase, Oracle OLAP, Microsoft Analysis Services, SAP BW Use as an alternative aggregation and calculation engine Data plugs directly into the OBIEE metadata layer, converted to relational tables and columns Integrate with, or replace, existing relational sources Essbase 11.1.x can also source cubes from the OBIEE metadata layer Extract via the RPD into a cube, then re-import into OBIEE
The Oracle BI Server Cache Stores local copy of data that is requested Can be pre-seeded by ibots (agents) Off by default, enabled using NQSConfig.ini file Requires that you put a cache management strategy in place Particularly suited to DW environments Cache is not aware of when data becomes stale Beware of this being your only performance optimization strategy Only solves a subset of problems, addresses the symptoms rather than the cause
Summary The OBIEE technology stack is potentially complex, with many components Follow best practice from other Oracle areas, and diagnose before optimizing Decide whether the problem is system-wide or specific to a particular report Key objective in tuning is to push as much down to the database as possible When this is not possible, several techniques are available to optimize performance Various logging and diagnostic tools are available Techniques are possible to reduce the cost of cross-database joins Aggregates can be created and maintained by the BI Administration tool Aim at the end should be to offload as much work to the database, OLAP server or ETL process as is possible; when this cannot be done, understand how the BI Server performs complex queries, and how to diagnose and detect problems.