Database Monitoring and Performance Tuning



Only for Data Group students. Do not share with outsiders and do not use for commercial purposes.


Written by Zakir Hossain, CS Graduate (OSU), OCP, OCA, MCDBA, MCITP-DBA, Security+, Oracle RAC-Admin, Oracle Backup/Recovery-Admin, Oracle Performance/Monitoring-Admin, Oracle App Server-Admin, System Admin (Windows/RedHat), SCJP (Sun Certified Java Programmer), ITIL V3, PMP. CEO/President, Data Group; Manager, Enterprise Architecture & Database, DOD.

NOTE - Copyright: Altering, printing, sharing with anybody or any training institute/provider, or commercial use without written permission is strictly prohibited. Warning: Violators will be prosecuted to the fullest extent of federal law. Email: info@datagroupusa.com Web: DataGroupUSA.com/portal Phone: 703 986-9944

We monitor a database to determine whether it is handling its workload efficiently. Before monitoring, set a goal and pick the appropriate tool. If the database is running efficiently, use the captured data to establish a performance baseline. If it is not, use the captured data to determine how to improve performance. Here are some examples:

Example 1: If you are monitoring response time for frequently used queries, you can determine whether changes to the queries or to the indexes on their tables are necessary.

Example 2: If you are concerned about security, use the captured data to track user activity; for instance, you can monitor the SQL commands users are attempting to run.

Likewise, if a database is not operating properly, use the captured data to troubleshoot the problem or to debug application components such as stored procedures, functions, cursors, database connections from applications running against the database, or automated jobs that run at various times.

Reasons to monitor a database server:
1. To find problems, errors, and suspicious behavior; to manage security and disk utilization; and to handle user and developer requests and any other problems that arise during day-to-day operations
2. To troubleshoot problems and errors proactively as well as reactively
3. To monitor server performance
4. To trace user activity

Some examples of why you need to monitor:
1. If users are having problems connecting to your database server, you need to find the cause of the problem.
2. To improve database performance. The goal here is to minimize the time it takes for users to see query results and to maximize the number of queries the server can handle simultaneously. Keep in mind that overall performance depends on many factors, not only on the database itself.
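As a concrete illustration of Example 1, SQL Server can report a query's CPU and elapsed time directly in a query window. A minimal sketch (the table and WHERE clause are hypothetical, not from the original text):

```sql
-- SET STATISTICS TIME reports parse/compile and execution times
-- in the Messages tab of the query window.
SET STATISTICS TIME ON;

SELECT TOP (100) *          -- stand-in for a frequently used query
FROM dbo.Orders             -- hypothetical table
WHERE customer_id = 42;

SET STATISTICS TIME OFF;
```

If the reported elapsed time remains high after the first (compile) run, that is a hint the query or the indexes on its tables need attention.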
Here is a list of the top issues that can affect the performance of a database:

1. Operating system: The OS version and the number of background processes running concurrently affect the performance of a database.

2. Application performance: Any application that uses the database in the back end, such as a POS terminal, a web application (an e-commerce site or database-driven website), or an MS Access front end for a SQL Server database, can hurt database performance if it is not optimized, contains badly written queries, or is heavily used.

3. Memory: The amount, speed, and type of memory play a vital role in the performance of a system, including its databases. Every database server needs enough memory to work properly. A database is suffering from a memory bottleneck if:
a. The Page Life Expectancy counter is consistently low. This counter, under SQL Server: Buffer Manager, represents the average number of seconds a page stays in cache. For an OLTP database, an average page life expectancy of 300 seconds (5 minutes) is the common rule of thumb; anything less could indicate memory pressure, missing indexes, or a cache flush.
b. There is a big drop in Page Life Expectancy. OLTP databases should have a steady (or slowly increasing) page life expectancy.
c. Memory grants are pending. The Memory Grants Pending counter is under SQL Server: Memory Manager. Small OLTP transactions should not require large memory grants.
d. The cache hit ratio is low. OLTP applications should have a high cache hit ratio. Because OLTP transactions are small, there should not be (i) big drops in the cache hit rate or (ii) a cache hit rate below 90%. Drops or a low cache hit rate may indicate memory pressure or missing indexes.

Solution for PLE: Adding memory is in most cases the way to keep Page Life Expectancy (PLE) above 300. If PLE is at or below 100, the bottleneck is almost certainly memory; find the currently executing queries and their memory usage.

Determining PLE with Performance Monitor:
1. Click Start and type perfmon
2. Select SQLServer:Buffer Manager
3. Highlight "Page Life Expectancy" and click Add
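The same counter can also be read with T-SQL instead of perfmon, through the sys.dm_os_performance_counters DMV. A sketch (on a default instance the object name contains "Buffer Manager"; on a named instance it is prefixed with the instance name, which is why the LIKE filter is used):

```sql
-- Page Life Expectancy in seconds, straight from the DMV
SELECT [object_name],
       counter_name,
       cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Page life expectancy'
  AND [object_name] LIKE N'%Buffer Manager%';
```

A value persistently below 300, or well below your own baseline, points to the memory pressure described above.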

4. CPU: A database is suffering from a CPU bottleneck if:
a. Signal waits are more than 25% of total waits. Use sys.dm_os_wait_stats to obtain signal waits and total waits. Signal waits measure the time spent in the runnable queue waiting for a CPU; high signal waits indicate a CPU bottleneck.
b. Plan re-use is below 90%. A query plan is used to execute a query, and plan re-use is desirable for OLTP databases because re-creating the same plan for similar or identical transactions is a waste of CPU resources. Compare the SQL Statistics counters Batch Requests/sec and SQL Compilations/sec. Plan re-use is calculated as: Plan re-use = (Batch requests - SQL compilations) / Batch requests.
c. Note that zero-cost plans are not cached (and therefore not re-used) as of SQL Server 2005 SP2. Applications that use zero-cost plans will show lower plan re-use, but this is not a performance issue.
d. The parallelism wait type (CXPACKET) exceeds 10% of total waits. This type of problem tends to stem from CPU speed.
e. How to diagnose the problems above:

CPU historically low: The cause could be a long-running or runaway query. To find it, use sp_who or sp_who2 (sp_who3 is a community-written variant, not a built-in procedure) and look for connections that have a SPID greater than 50 and high CPU time. Remember that sp_who2 shows cumulative time for an entire connection, so a connection that has been open for a few days may legitimately show high CPU time. Once you have found a suspect SPID, run the following to see what it is executing:

DECLARE @SPID INT;
SET @SPID = YOUR_SPID;
DBCC INPUTBUFFER(@SPID);

This displays what is being executed. If you decide the session must be killed, use the KILL command. Note that KILL accepts only a literal session ID, not a variable:

KILL YOUR_SPID;

CPU historically high: CPU time can be historically high if a database is missing some or all of the indexes it needs. The following query in a query window lists tables that have no indexes at all:

USE sql_programming;

GO
SELECT SCHEMA_NAME(schema_id) AS Schema_Name,
       name AS Table_Name
FROM sys.tables
WHERE OBJECTPROPERTY(object_id, 'IsIndexed') = 0
ORDER BY Schema_Name, Table_Name;
GO

To list all the indexes on a specific table, such as Customers:

EXEC sp_helpindex Customers;

Note that sp_helpindex requires an object name; to list indexes across all tables, query the sys.indexes catalog view instead.

If a table is missing the correct or required indexes, queries against it degenerate into table scans, and table scans are a cause of high memory IO, high disk IO, and high CPU time.

Random execution plan: This is another big cause of high CPU consumption (a random execution plan is also called a spinning execution plan). To detect this type of issue, look at the CPU time and the amount of disk IO consumed by each session. For example:

SPID  DBName           Command      CPUTime  DiskIO
78    sql_programming  SELECT       2349813  0
84    SIS_ETL          SELECT INTO  0        0

Here we see high CPU time with little or no IO. To solve this type of problem, rewrite the query.

5. Hard disk (HD): The disk type, speed, and number of physical disks also affect database performance, and the disk configuration (RAID level, SAN, or NAS) has a big impact. In most cases a database under intensive read pressure on an underperforming disk is really short of memory, or is missing correct and required indexes. This shows up as a high disk queue length, and the disk is the culprit if:
1. CPU % is very low because the CPU is waiting on the disk
2. The server is write-intensive, for example when a subscriber needs to apply a large replication

3. Import processes write with a full TRUNCATE and full INSERT rather than applying only the changes
4. DELETE is used where TRUNCATE could be used

So if CPU utilization is below 30% and PLE is above 300, it is almost certainly a hard disk problem.

How to determine a hard disk problem:
1. Open perfmon
2. Select Physical Disk and look for:
A. % Idle Time below 30%
B. Avg. Disk sec/Transfer above 0.100
C. Disk Queue Length above 20

6. Paging: Frequent page file swapping hurts database performance heavily. If the OS suffers from external memory pressure, it steals memory from non-system-level processes, whose working sets are then swapped to the paging file. Page file usage should stay under 8%; SQL Server can survive up to about 10%, but above 10% database performance can be affected severely.

Determining paging:
1. Open perfmon
2. Select Paging File
3. Select both % Usage Peak and % Usage
NOTE: % Usage Peak indicates the peak amount used since the server was last rebooted.

Solutions:
1. Reboot the system
2. Restart the database server instance (sometimes this alone solves the problem)
3. Turn on AWE (Address Windowing Extensions). On a 64-bit OS this is usually unnecessary, as memory mapping is handled automatically; on a 32-bit OS that suffers from page swapping, turn on AWE
4. Avoid moving or copying files off the SQL Server machine while working on the SQL Server machine itself; instead, pull the files from another machine

7. Other services/instances: Anything else running on the same system, such as a web server, a reporting server (for example SQL Server Reporting Services), an FTP server, a Telnet server, remote users accessing the system, multiple instances, or other heavily used databases, will degrade database performance.

8. Network-related issues: Network speed and bandwidth themselves can degrade performance.
For example, if most of your database clients connect to the database from outside the LAN (local area network), do not

have a good network connection, and at the same time try to perform too many updates on the system, the performance of your database will easily suffer.
a. High network latency: An application that requires many round trips to the database suffers when network latency is high. Network latency is measured either one-way (the time to send a packet from source to the destination receiving it) or round-trip (the one-way latency from source to destination plus the one-way latency from the destination back to the source).
b. Network bandwidth: Look at the Packets/sec and Current Bandwidth counters in the Network Interface object of Performance Monitor.

9. There is a blocking bottleneck if:
a. The top wait statistics are LCK_M_X. Use the DMV sys.dm_os_wait_stats to check.
b. There is a high number of deadlocks. Use the Profiler Deadlock Graph event (under the Locks event category) to identify the statements involved in the deadlock.
c. Average row lock or latch waits are high. The average row lock or latch wait is computed by dividing lock and latch wait time (ms) by the number of lock and latch waits. Use the DMV sys.dm_db_index_operational_stats to find the average wait time for each block.

10. There is an IO bottleneck if:
a. Table scans occur: big IOs, such as table and range scans, occur because of missing indexes. By definition, OLTP transactions should not require big IOs, so any that do should be examined.
b. The top wait statistics in sys.dm_os_wait_stats are IO-related, such as ASYNC_IO_COMPLETION, IO_COMPLETION, LOGMGR, WRITELOG, or PAGEIOLATCH_x.
c. Disk seconds per read is high: when the IO subsystem is queued, disk seconds per read increases. Look at the Logical Disk or Physical Disk counter Avg. Disk sec/Read. A read normally completes in 4-8 ms when there is no IO pressure; a value above 15 ms indicates a disk bottleneck. When the IO subsystem is under pressure from high IO demand, the average time to complete a read increases, showing the effect of disk queues. Periodic higher values for disk seconds per read may be acceptable for many applications.
For high-performance OLTP applications, sophisticated SAN subsystems provide greater IO scalability and resiliency in handling spikes of IO activity.
d. Disk seconds per write is high: look at the Logical Disk or Physical Disk counter Avg. Disk sec/Write. A transaction log write can be as fast as 1 ms (or less) in high-performance SAN environments. However, a sustained high value for average disk seconds per write is an

indicator of a disk bottleneck. The throughput of high-volume OLTP applications depends on fast sequential transaction log writes. For many applications, a periodic spike in average disk seconds per write is acceptable, considering the high cost of sophisticated SAN subsystems.

11. Blocking: Blocking is caused by contention for resources (CPU, memory, and hard disk). To understand blocking, we need to understand the database's locking mechanism. Locking ensures that users see only correct, up-to-date rows from a table; if records in the middle of an update were shown to users before the update completed, the users would see incorrect information. Blocking occurs when an UPDATE or DELETE statement performs a table scan while a reader, such as a SELECT statement, tries to access the same records. The most common reason is a table with the wrong indexes, or no indexes at all; another is a query that fails to use an existing index. To resolve the situation, find the blocker (or blockers) and kill it. A blocker can be killed either from Management Studio/Enterprise Manager or from a query window. From a query window, issue:

KILL SPID
KILL 5890   -- kill the blocker whose SPID is 5890

Once the SPID has been killed, check whether an index is missing; if so, add an appropriate index. That solves the underlying problem.

12. Old or missing statistics: If a database has outdated statistics, it produces incorrect execution plans for SQL/T-SQL queries and programs. This type of problem is very hard to diagnose, so the best approach is to be proactive rather than reactive: turn on "Auto Create Statistics" and "Auto Update Statistics" for the database.
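Turning these two options on is a one-time ALTER DATABASE per database. A sketch, reusing the sql_programming sample database from earlier (substitute your own database name):

```sql
-- Enable automatic statistics creation and updates
ALTER DATABASE sql_programming SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE sql_programming SET AUTO_UPDATE_STATISTICS ON;

-- Verify the settings
SELECT name,
       is_auto_create_stats_on,
       is_auto_update_stats_on
FROM sys.databases
WHERE name = N'sql_programming';
```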
Checking these two settings first, before starting to troubleshoot, is highly recommended. It is also good practice to rebuild or recreate indexes periodically; however, this can be an intensive activity to run during normal working hours, as it uses resources

heavily. Suspect old or missing statistics if you experience a gradual decline in database performance over a period of weeks, months, or years. You can also hit this issue if you upgrade from SQL Server 2000 to SQL Server 2005 or 2008 and forget to update the statistics.

13. Database issues: The database itself can suffer from performance problems. With any database system there are a few key issues that affect performance, so to improve the performance of a database, pay attention to the following:

1. Bad database design: Good database design is a key to good database performance. A design is considered bad if it has:
a. Too many joins: frequently used queries that join too many tables. Overuse of joins in an OLTP application makes results slow to appear and wastes system resources. If frequently used queries require five or more table joins, consider redesigning the database.
b. Too many indexes: frequently modified tables (inserts, updates, and deletes) with too many indexes carry extra index maintenance overhead. In general, an OLTP database design should have the minimum number of indexes needed for its transactions.
c. Unused indexes: an unused index costs index maintenance on every insert, update, and delete without benefiting anyone, and should be deleted. Any index that has been used by a select, update, or delete operation appears in sys.dm_db_index_usage_stats; any defined index absent from this DMV has not been used since the last restart of SQL Server and is a candidate for deletion.
d. Table scans: big IOs, such as table and range scans, occur because of missing indexes. By definition, OLTP transactions should not require big IOs, so any that do should be examined.
2.
Poorly written SQL/T-SQL programs: Properly written SQL code is another key to good application performance, since it reduces the time needed to get data from the database. To trace a poorly written query and solve the problem, get the query from the developer and use any of the following tools:
A. Query execution plan: shows how the query is actually executed behind the scenes.
B. Profiler: lets you replay the query and trace the problem with it.

C. Database Engine Tuning Advisor.
D. Index Tuning Wizard: available in SQL Server 2000; in SQL Server 2005 it is superseded by the Database Engine Tuning Advisor.

3. Incorrect indexes: Create an index only if you need it. You may have an index on a table that your application never uses; if no application uses an index, do not create it. Key points to consider when creating indexes:
a. Create an index only if it is required by the applications using the database
b. Create a clustered index for each table
c. Keep indexes as narrow as possible
d. Prefer indexes on integer columns over character columns. Indexes on integer values are usually smaller than indexes on character values, and they take less time to scan, because an index on character values must compare each character
e. Drop indexes that are not used; the index tuning tools can identify indexes that queries never use
f. Do not create indexes on columns with low selectivity in queries
g. If the application updates data very frequently, limit the number of indexes
h. If several tables are joined frequently, create indexes on the join columns

Consider creating the clustered index before any nonclustered index, and try to create it on a numeric (for example INT) column, since the clustered index keeps the data sorted as records are inserted into the table.

4. Fragmented indexes: Fragmentation is a situation where storage space is used inefficiently; it takes more storage and in most cases reduces the performance of your database. How do we know an index is fragmented?
a. If the same query or application takes longer than it used to, either an index does not exist or an existing index has become fragmented;
if it is fragmented, we need to defragment the index or indexes.
b. Use the system function sys.dm_db_index_physical_stats.
c. We can detect fragmentation of a specific index, of all indexes on a table or indexed view, of all indexes in a database, or of all indexes in all databases.

How to defragment indexes (two ways):
a. Rebuild or reorganize the indexes

b. Drop and recreate the indexes

5. Update statistics: Up-to-date statistics help the optimizer process queries efficiently, so the UPDATE STATISTICS command should be issued on a regular basis to give SQL Server the most recent data. The Auto Update Statistics feature should be enabled for small databases; however, consider disabling it for large databases, since updating the statistics of a large table in the middle of the day can cause performance issues. Auto Update Statistics is configured per database, with the ability to skip automatic recomputation on larger tables by running UPDATE STATISTICS with the NORECOMPUTE option.

Statistics can be updated with T-SQL or with a stored procedure. Some examples:

USE SQL_PROGRAMMING;

Updating statistics settings using the sp_autostats stored procedure:

-- Display the status of all statistics on a table
EXEC sp_autostats 'Employee';

-- Enable AUTO_UPDATE_STATISTICS for all statistics on a table
EXEC sp_autostats 'Employee', 'ON';

-- Disable AUTO_UPDATE_STATISTICS for a specific index
EXEC sp_autostats 'Employee', 'OFF', Employee_ID;

Updating statistics using T-SQL:

-- Update all statistics on a specific table
UPDATE STATISTICS Employee;

-- Update statistics on a specific table using sampling
UPDATE STATISTICS Candidate WITH SAMPLE 50 PERCENT;

-- Update statistics on a specific table with a full scan and disable auto-update

UPDATE STATISTICS Employee WITH FULLSCAN, NORECOMPUTE;

-- Update statistics on one index of a table
UPDATE STATISTICS Employee Employee_ID;

-- Update statistics, but disable auto-update
UPDATE STATISTICS Company WITH NORECOMPUTE;

-- Create statistics on specific columns of a table
CREATE STATISTICS Employee ON employee (Employee_ID, Company_ID) WITH SAMPLE 50 PERCENT;

Tools we can use to monitor and tune database performance (the original table rates each tool against the categories SQL Server, Windows, Network, Monitoring, Tuning, Reporting, Alerting, and Advice):
- SQL Server Profiler
- SQL Server Database Tuning Advisor
- SQL Server Index Tuning Advisor (Wizard)
- Microsoft Performance Monitor
- Idera SQL DM
- Idera SQL Doctor
- RedGate SQL Monitor
- Quest Foglight
- Quest Spotlight

Third-party commercial tools:
- Idera SQL Diagnostic Manager (DM): http://www.idera.com/products/sql-server/sql-diagnostic-manager/
- Quest Foglight Performance Analysis for SQL Server: http://www.quest.com/foglight-performance-analysis-for-sql-server/
- Quest Spotlight on SQL Server Enterprise: http://www.quest.com/spotlight-on-sql-server-enterprise/
- RedGate SQL Monitor: http://www.red-gate.com/products/dba/sql-monitor/
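Returning to the Fragmented Index section above, the check with sys.dm_db_index_physical_stats and the two defrag options can be sketched in T-SQL as follows (the 10%/30% thresholds are a common rule of thumb, not from the original text, and the index name is hypothetical):

```sql
-- Check fragmentation for every index in the current database
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Option (a): rebuild or reorganize
ALTER INDEX IX_Employee_Employee_ID ON dbo.Employee REORGANIZE;  -- light fragmentation (roughly 10-30%)
ALTER INDEX IX_Employee_Employee_ID ON dbo.Employee REBUILD;     -- heavy fragmentation (above roughly 30%)
```

Option (b), dropping and recreating the index, achieves the same result but loses the index for the duration.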

Third-party free tools:
- SQL Check v3.0 (SQL Server monitoring tool): http://www.idera.com/products/free-tools/sql-check/?s=bn120_mssqltips_chk
- SQL Job Manager: http://www.idera.com/products/free-tools/sql-job-manager/
- SQL Permissions: http://www.idera.com/products/free-tools/sql-permissions/
- Spiceworks SQL Server monitoring: http://www.spiceworks.com/free-sql-server-monitoring-tool/

SQL Server native tools and resources available for monitoring:
1. SQL Server Profiler: lets you trace server events
2. Windows Server Performance Monitor: provides SQL Server counters to track many different SQL Server resources and activities
3. Log files:
a. SQL Server logs: troubleshoot SQL Server problems
   To view: Management Studio > expand Management > SQL Server Logs > double-click a log file
b. SQL Server Agent logs: troubleshoot SQL Server Agent problems
   To view: Management Studio > expand SQL Server Agent > Error Logs > double-click a log file
c. Windows event logs: troubleshoot system-wide problems, including SQL Server and SQL Server Agent problems as well as others
   To view: Administrative Tools > double-click Event Viewer > expand Windows Logs > click Application
4. Stored procedures:
a. sp_helpserver: information about SQL Server instance configuration for replication and remote access
b. sp_helpdb: information about databases
c. sp_helpindex: information about the indexes on a table or view
d. sp_spaceused: information about the disk space used by a table, view, or index in the current database
e. sp_monitor: usage statistics such as CPU usage and CPU idle time

f. sp_who: lists current users and processes
g. sp_who2: lists current users and processes with additional detail such as CPU and disk IO (undocumented)
5. DBCC: check database integrity, trace activity, gather statistics
6. DMVs (dynamic management views):
a. sys.dm_tran_locks: information about object locks
7. Activity Monitor: information on current users, processes, and locks
   SQL Server 2000: Enterprise Manager > right-click the Database Engine instance > Activity Monitor
   SQL Server 2005/2008: Management Studio > Tools > Options > General > At startup: Open Object Explorer and Activity Monitor
8. Job Activity Monitor: detailed information on the status of jobs
   To view: Management Studio > expand SQL Server Agent > double-click Job Activity Monitor
9. SNMP (Simple Network Management Protocol) tools, for example HP OpenView, to monitor server and database activity
10. SQL Server built-in functions:

Function (with example)                                        Description
SELECT @@CONNECTIONS AS [Total Logins]                         Total number of connections
SELECT @@IDLE AS [Idle Time], GETDATE() AS [Since]             SQL Server idle time in milliseconds
SELECT @@TOTAL_READ AS [Reads], GETDATE() AS [Since]           Number of disk reads by SQL Server
SELECT @@TOTAL_WRITE AS [Writes], GETDATE() AS [Since]         Number of disk writes by SQL Server
SELECT @@TOTAL_ERRORS AS [Total Errors], GETDATE() AS [Since]  Number of disk read/write errors encountered by SQL Server
SELECT @@PACK_SENT AS [Packets Sent]                           Number of packets written to the network by SQL Server

   Select @@pack_received as [Packets Received]                     Number of packets read from the network by SQL Server
   Select @@packet_errors as [Packet Errors]                        Number of network packet errors for SQL Server connections
   Select @@io_busy as [IO Time], getdate() as [Since]              I/O processing time in milliseconds used by SQL Server
   Select @@cpu_busy as [CPU Busy], getdate() as [Since]            CPU processing time in milliseconds used by SQL Server

How to Read Log Files
Error Messages: SQL Server error messages contain the following information:
1. Error Number: Uniquely identifies an error message
   a. Error numbers 1 to 50,000: system errors
   b. Error numbers 50,001 and above: user-defined errors
2. Severity Level: Indicates how critical the error is. Severity levels range from 0 to 25.
   a. 0 to 10: Informational purposes only
   b. 11 to 16: Errors that can be corrected by the user
   c. 17 to 25: Hardware or software errors; a DBA must look into these
3. Error State: Identifies the source of the error, i.e. which location in the code raised it. State numbers range from 1 to 127.
4. Message: Provides a brief description of the error.

How to Tune/Optimize a Database:
1. Better database design: Create a normalized or moderately denormalized database
2. Effective indexes: Create indexes on the column or columns customers use to find a record
3. Use cursors only when absolutely necessary

4. Use stored procedures instead of Transact-SQL batches
5. Upgrade the database hardware and relocate elements of the database, such as the underlying files or tables
6. Multiple CPUs (4, 8, 16, or 32) in your server improve database performance, since SQL Server can use them to process queries in parallel
7. Properly place tables across multiple physical disks/drives
   a. Filegroups and tables
      i. Use multiple physical disks to distribute tables, e.g. busy tables on one disk and less frequently used tables on other physical disks. For example, find the most frequently used tables and place them on different disks.
      ii. Use multiple data files in multiple filegroups

How to Optimize a Query:
1. Analyze the query before running it to determine the most efficient method of execution (the result of this analysis is called the execution plan)
2. By default, indexes are stored in the same filegroup as the table. It is often better to put nonclustered indexes in a different filegroup. Remember, a clustered index must reside in the same filegroup as the table itself.
3. Replace a subquery with a join if possible:

   select st.store_name as Store,
          isnull((select sum(bs.qty)
                  from big_sales as bs
                  where bs.store_id = st.store_id), 0) as [Books Sold]
   from stores as st
   where st.store_id in (select distinct store_id from big_sales)
   group by st.store_name

   We can replace the above query with a join:

   select st.store_name as Store, sum(bs.qty) as [Books Sold]
   from stores as st
   join big_sales as bs on bs.store_id = st.store_id
   where st.store_id in (select distinct store_id from big_sales)
   group by st.store_name

What SQL Server does in the back end:

Subquery:
1. SQL Server parse and compile time:
   CPU time = 28 ms, elapsed time = 28 ms
2. SQL Server Execution Times:
   CPU time = 145 ms, elapsed time = 145 ms

Join:
1. SQL Server parse and compile time:
   CPU time = 50 ms, elapsed time = 54 ms
2. SQL Server Execution Times:
   CPU time = 109 ms, elapsed time = 109 ms

In more detail:
For the subquery:
   A. Table big_sales: Scan count 14, logical reads 1884, physical reads 0, read-ahead reads 0
   B. Table stores: Scan count 12, logical reads 24, physical reads 0, read-ahead reads 0
For the join:
   A. Table big_sales: Scan count 14, logical reads 966, physical reads 0, read-ahead reads 0
   B. Table stores: Scan count 12, logical reads 24, physical reads 0, read-ahead reads 0

4. Explicit joins give the optimizer more options to choose the order of tables and find the best possible plan.

Monitoring with SQL Server Profiler:
In short, SQL Profiler monitors server and database activity by logging events to a file or table.
Trace: The logged events are called a trace.
Trace File: A trace saved to a file is called a trace file.
Trace Table: A trace saved to a table is called a trace table.
After tracing events, you can replay the saved traces in SQL Profiler against an instance. If a trace is too large, you can filter it to work with only a subset of the data/events.

What can be monitored/done using SQL Profiler (SQL Profiler logs/stores data in a SQL Server table or a file):
1. Monitoring SQL Server performance
2. Monitoring (analyzing and debugging) queries/SQL statements and stored procedures
3. Identifying slow-executing queries
4. General troubleshooting and debugging in SQL Server
5. Tuning indexes

6. Stress analysis/load analysis
7. Auditing server and database activity/auditing security activity
   a. Success or failure of login attempts, and success or failure of permission requests when accessing statements and objects

How to create a SQL Profiler template:
1. Click Start, point to Programs, point to Microsoft SQL Server, and then click Profiler.
   a. The SQL Profiler window appears.
2. On the File menu, point to New, and then click Trace Template.
   a. The Trace Template Properties dialog box appears.
3. On the General tab, click Save As.
   a. The Save As window appears.
4. In the Filename text box, type SQLProfiler_Exercise1 and click Save.
   a. The file path and filename appear in the General tab.
5. Click the Events tab.
6. In the Available Event Classes box, scroll down, select TSQL, and then click Add.
   a. All Transact-SQL event classes are added to the Selected Event Classes box.
7. Click the Data Columns tab.
8. In the Unselected Data box, scroll down, select the TextData column, and click Add.
   a. The TextData column appears in the Selected Data box.
9. Click the Up button so that TextData appears first in the column list.
10. In the Selected Data box, click Groups.
11. In the Unselected Data box, click CPU and then click Add.
    a. The CPU column appears under Groups in the Selected Data box.
12. Click the Filters tab.
13. In the Trace Event Criteria box, expand ApplicationName.
    a. The Like and Not Like criteria appear.
14. Expand the Like criteria and, in the empty text box that appears, type Query Analyzer.
15. Click Save.
16. Leave SQL Profiler open to complete the next practice.

Preparing SQL Profiler to run a trace:
1. On the SQL Profiler toolbar, click the New Trace icon (the first icon on the toolbar).
   a. The Connect to SQL Server dialog box appears.
2. Verify that the Windows Authentication radio button is selected, and click OK.
   a. The Trace Properties dialog box appears, with the General tab in focus.
3. In the Trace Name text box, type Trace01.
4. In the Template Name drop-down list box, select SQLProfiler_Exercise1.
5. Select the Save To File check box.

   a. The Save As window appears, with Trace01 as the default filename.
6. Click Save.
   a. The trace file is saved to the My Documents folder and the Trace Properties dialog box reappears. Notice in the Trace Properties dialog box that the maximum file size is set to 5 MB and that file rollover is enabled. The client processes the event data, because Server Processes SQL Server Trace Data is not checked.
7. Click through and review the settings displayed in the Events, Data Columns, and Filters tabs.
   a. The settings in these tabs are identical to the template settings.
8. Leave SQL Profiler open, but do not click Run in the Trace Properties dialog box.

Generate some SQL Server activity and run a trace to capture it:
1. Open Query Analyzer and connect to your local server or another server.
2. In the Editor pane of the Query window, enter and execute the following code:

   USE datavoice
   GO
   IF EXISTS (SELECT name FROM dbo.sysobjects
              WHERE name = 'table01' AND type = 'U')
       DROP TABLE table01
   CREATE TABLE table01
   (
       uniqueid  int IDENTITY,
       longcol02 char(300) DEFAULT 'This is the default value for this column',
       col03     char(1)
   )
   GO
   DECLARE @counter int
   SET @counter = 1
   WHILE @counter <= 5000
   BEGIN
       INSERT table01 (col03) VALUES ('a')
       INSERT table01 (col03) VALUES ('b')
       INSERT table01 (col03) VALUES ('c')
       INSERT table01 (col03) VALUES ('d')
       INSERT table01 (col03) VALUES ('e')
       SET @counter = @counter + 1
   END
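Incidentally, the parse/compile times, execution times, and logical-read counts quoted in the subquery-vs-join comparison earlier come from session statistics, which you can switch on before running any query. A minimal sketch (the SELECT shown assumes the table01 created above):

```sql
-- Emit per-query CPU/elapsed time and per-table I/O counts
-- in the Messages output of Query Analyzer / Management Studio
SET STATISTICS TIME ON   -- parse/compile and execution times, in ms
SET STATISTICS IO ON     -- scan count, logical/physical/read-ahead reads
GO

SELECT col03, longcol02 FROM table01 WHERE col03 = 'a'
GO

SET STATISTICS TIME OFF
SET STATISTICS IO OFF
GO
```

Both settings affect only the current session, so they do not change behavior for other users.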

The code in step 2 first checks for a table named table01 in the datavoice database. If a table with this name is found, it is dropped. The table is then recreated with three columns and populated with data (5,000 loop iterations of five inserts each, i.e. 25,000 rows). Inserting the rows will take a few moments.
3. In the Query Analyzer Editor pane of the Query window, enter but do not execute the following code:

   SELECT col03, longcol02 FROM table01 WHERE col03 = 'a'
   SELECT uniqueid, longcol02 FROM table01 WHERE uniqueid = 10000
   SELECT * FROM table01 WHERE uniqueid BETWEEN 5000 AND 10000
   GO

These Transact-SQL statements run queries against table01. SQL Profiler will trace the execution of this batch. Typically, you run SQL Profiler traces several times a day to gather a representative sample of database activity.
4. Switch to the SQL Profiler window that you left open in the previous practice.
5. In the Trace Properties dialog box, click Run. The two-paned trace window appears, and four data columns appear in the top pane.
6. Switch to Query Analyzer and run the SELECT statements entered in step 3 of this practice.
7. Switch to SQL Profiler and watch as the trace captures the Transact-SQL activity. Trace data appears in the top pane of the trace window.
8. When a record containing SQL:BatchCompleted in the EventClass column appears, click the red square on the toolbar to stop the trace. An additional row is added to the top pane of the trace window, indicating that the trace stopped. Notice that the CPU data column appears only for the SQL:StmtCompleted and SQL:BatchCompleted event classes; it is not available or relevant for the other event classes. Also notice that the event classes with CPU values are grouped together.

How to analyze the trace data:
1. The statements are grouped by CPU time. The CPU column shows the amount of CPU time, in milliseconds, used by each event.
2. Click each of the rows containing a value in the CPU column. The text data for each Transact-SQL event appears in the bottom pane. Which statement in the batch required the most CPU time to execute? Which event required the most CPU time? Explain your answer.
3. Switch to Query Analyzer and insert the GO command between the SELECT statements. The code should now look like this:

   SELECT col03, longcol02 FROM table01 WHERE col03 = 'a'
   GO
   SELECT uniqueid, longcol02 FROM table01 WHERE uniqueid = 10000
   GO
   SELECT * FROM table01 WHERE uniqueid BETWEEN 5000 AND 10000
   GO

4. Switch to SQL Profiler and restart the trace.
5. Switch to Query Analyzer and execute the code you modified in step 3 of this practice.
6. Switch back to SQL Profiler and examine how the positioning of the GO commands changed the output of the trace.
7. When the query is finished, stop the trace. How does the trace output differ from the trace you created in the previous practice?
8. Close SQL Profiler and Query Analyzer.

To see all error messages, execute the following:

   USE master
   GO
   SELECT * FROM sys.messages
   GO

Available Native Tools and Resources for Monitoring & Performance Tuning:
1. SQL Server Profiler: Allows tracing of server events. In a nutshell, Profiler provides the lowest common denominator of activity on a SQL Server instance. Profiler captures per-session code, with the ability to filter the data collection by database, login, host name, application name, etc. in order to assess the I/O, CPU usage, time needed, and so on.
2. Windows Server Performance Monitor: Provides SQL Server counters to track many different SQL Server resources and activities.
3. Log Files:
   a. SQL Server Logs: Troubleshoot SQL Server problems
      i. How to view: Management Studio > Expand Management > Logs > Double-click a log file to view
   b. SQL Server Agent Logs: Troubleshoot SQL Server Agent related problems

      i. How to view: Management Studio > Expand SQL Server Agent > Error Logs > Double-click a log file to view
   c. Windows Event Logs: Troubleshoot system-wide problems, including SQL Server and SQL Server Agent problems as well as other system problems
      i. How to view: Administrative Tools > Double-click Event Viewer > Expand Windows Logs > Click Application
4. Stored Procedures:
   a. sp_helpserver: Provides information about SQL Server instance configuration for replication and remote access
   b. sp_helpdb: Provides information about databases
   c. sp_helpindex: Provides information about indexes on a table or view
   d. sp_spaceused: Provides information about disk space used by a table, view, index, etc. in the current database
   e. sp_monitor: Provides usage statistics such as CPU usage and CPU idle time
   f. sp_who: Lists current users and processes
   g. sp_who2: Lists current users and processes (SQL Server 2008 only)
5. DBCC: Checks database integrity, traces activity, and reports statistics. See the DBCC chapter in this book.
6. DMVs (Dynamic Management Views): New to SQL Server 2005/2008, the dynamic management views and functions offer a real-time view into the SQL Server subsystems. DMVs available in SQL Server 2005 and 2008 include:
   sys.dm_db_file_space_usage - Database file usage, to determine whether databases are getting low on space and need immediate attention
   sys.dm_clr_loaded_assemblies - Assemblies available in SQL Server
   sys.dm_exec_cached_plans - Cached query plans available to SQL Server
   sys.dm_exec_sessions - Sessions in SQL Server
   sys.dm_exec_connections - Connections to SQL Server
   sys.dm_db_index_usage_stats - Seeks, scans, and lookups per index
   sys.dm_io_virtual_file_stats - I/O statistics for data and log files
   sys.dm_broker_connections - Service Broker connections to the network

   sys.dm_os_memory_objects - SQL Server memory usage
   sys.dm_tran_active_transactions - Transaction state for an instance of SQL Server
   sys.dm_tran_locks - Information about object locks
7. Activity Monitor: Provides information on current users, processes, and locks
   a. How to monitor: Management Studio > Right-click the Database Engine instance > Select Activity Monitor
8. Job Activity Monitor: Provides detailed information on the status of jobs
   a. How to monitor: Management Studio > Expand SQL Server Agent > Double-click Job Activity Monitor
9. Index Tuning Advisor: Available only in SQL Server 2000 and 2005; deprecated in SQL Server 2008. It helps determine whether a query requires an index. If the query does require one, it identifies the column(s) and also writes the SQL query to create the index. You can take that query and run it in Query Analyzer, or implement the index directly from the Index Tuning Advisor.
10. Database Tuning Advisor
11. SQL Server Management Studio built-in performance reports
12. System objects: System objects such as sp_who, sp_who2, sp_lock, etc. provide a simple means of capturing basic metrics related to locking, blocking, executing code, etc.
13. Built-in Functions:

   Function (with example)                                          Description
   Select @@connections as [Total Logins]                           Returns the total number of connections to SQL Server
   Select @@idle as [Idle Time], getdate() as [Since]               SQL Server idle time in milliseconds
   Select @@total_read as [Reads], getdate() as [Since]             Number of disk reads by SQL Server
   Select @@total_write as [Writes], getdate() as [Since]           Number of disk writes by SQL Server
   Select @@total_errors as [Total Errors], getdate() as [Since]    Number of disk read/write errors encountered by SQL Server

   Select @@pack_sent as [Packets Sent]                             Number of packets written to the network by SQL Server
   Select @@pack_received as [Packets Received]                     Number of packets read from the network by SQL Server
   Select @@packet_errors as [Packet Errors]                        Number of network packet errors for SQL Server connections
   Select @@io_busy as [IO Time], getdate() as [Since]              I/O processing time in milliseconds used by SQL Server
   Select @@cpu_busy as [CPU Busy], getdate() as [Since]            CPU processing time in milliseconds used by SQL Server

Performance Monitor counters:
Performance Monitor counters play a vital role in understanding how SQL Server is behaving at a macro level and how overall resources are being used within the Database Engine. Without capturing Performance Monitor counter data, it is difficult to determine where performance issues are occurring. Traditionally, these metrics have been captured from Performance Monitor, either on an ad hoc basis or by setting up a log to capture the values on a predefined schedule. In SQL Server 2000, the important system table is dbo.sysperfinfo in the master database. Unfortunately, this table was converted to a view in SQL Server 2005 and 2008, and dbo.sysperfinfo remains available only for backward compatibility. Since this is the case, how can you capture Performance Monitor values on an as-needed basis in SQL Server 2005? In SQL Server 2005 and 2008, one of the new dynamic management objects is sys.dm_os_performance_counters. This DMV makes it possible to query a view directly to capture the SQL Server counters for the instance. This gives DBAs a great deal of flexibility to capture metrics in real time and find issues as they occur. Determining the SQL Server performance metrics is as simple as a SELECT statement.
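As a sketch of such a SELECT (the column names are those exposed by the DMV in SQL Server 2005/2008):

```sql
-- Snapshot of all SQL Server counters for this instance
SELECT object_name, counter_name, instance_name, cntr_value, cntr_type
FROM sys.dm_os_performance_counters;

-- Narrow the snapshot to one category, e.g. the per-database counters
SELECT counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Databases%'
ORDER BY instance_name, counter_name;
```

The loose '%:Databases%' pattern avoids hard-coding the category prefix, which differs between a default instance and a named instance.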

What does the result set look like?
A simple SELECT statement against the sys.dm_os_performance_counters view returns a five-column result set with 500+ counters. You can capture a great many counters to diagnose database issues; on one instance at Data Group, close to 1,000 counters are being captured, and you can expect more with large numbers of databases and with all of the SQL Server application components installed.

Explanation of the result set:

   Column          Explanation
   object_name     Counter category, i.e. MSSQL + $ + InstanceName + : + category (e.g. Databases) if you have a named instance. NOTE: Depending on the applications and services installed, 20+ categories will be captured for the SQL Server instance.
   counter_name    Counter name relative to the category; names may overlap between various object_name values.
   instance_name   Instance of the counter, which is either a database name or, if NULL, a counter related to the overall SQL Server.
   cntr_value      The captured or calculated value for the counter.
   cntr_type       Counter type as defined by Performance Monitor.

Available counters:

Depending on the services and applications installed, the number of counters will vary, but expect to be able to query at least 500. These counters range from memory usage to SQL Server application-specific counters, including:

   MSSQL:CLR                      MSSQL:Access Methods        MSSQL:User Settable
   MSSQL:Buffer Manager           MSSQL:Broker Statistics     MSSQL:SQL Errors
   MSSQL:Latches                  MSSQL:Buffer Partition      MSSQL:SQL Statistics
   MSSQL:Locks                    MSSQL:Buffer Node           MSSQL:Plan Cache
   MSSQL:Cursor Manager by Type   MSSQL:Memory Manager        MSSQL:General Statistics
   MSSQL:Databases                MSSQL:Catalog Metadata      MSSQL:Broker Activation
   MSSQL:Broker/DBM Transport     MSSQL:Transactions          MSSQL:Cursor Manager Total
   MSSQL:Exec Statistics          MSSQL:Wait Statistics

Limitations:
sys.dm_os_performance_counters is limited to SQL Server counters, so if you want system, physical disk, network interface card, etc. counters, you need to run Performance Monitor to capture them.

Setting Counters for Performance Monitoring:
To open Performance Monitor in Windows 2000/2003/2008, follow the steps below:
1. Start > Programs > Administrative Tools > Performance
NOTE: When viewing performance data in real time, you can view it as a report, a chart, or a histogram by clicking the corresponding tab.
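As a worked example of reading one of the MSSQL categories above straight from the DMV: ratio counters such as Buffer cache hit ratio are stored as a raw value plus a separate base counter, so the percentage has to be computed. This sketch assumes those standard counter names; the loose LIKE pattern covers both default and named instances:

```sql
-- Buffer Cache Hit Ratio, computed from the raw counter and its base
SELECT CAST(100.0 * r.cntr_value / NULLIF(b.cntr_value, 0)
            AS decimal(5, 2)) AS [Buffer Cache Hit Ratio %]
FROM sys.dm_os_performance_counters AS r
JOIN sys.dm_os_performance_counters AS b
    ON b.object_name = r.object_name
WHERE r.object_name LIKE '%:Buffer Manager%'
  AND r.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base';
```

NULLIF guards against division by zero if the base counter has not accumulated yet.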

To monitor SQL Server successfully, you must add the counters in Performance Monitor. To add counters:
1. Click the plus-sign button to open the Add Counters dialog box.
2. Select an object from the Performance Object list.
3. Choose either All Counters or Select Counters From List. If you opt to select individual counters, click the Explain button for a description of each one. You can also choose Select Instances From List. For example, if you added a PhysicalDisk counter, you could then

select an instance of either C: or D:, as shown in the figure below.
4. After you select the counter(s), click Add. You can then repeat the process for any additional objects you would like to use.
5. Click Close when you have added all of your counters.

The most common counters are CPU activity, memory, paging, and disk I/O; you should monitor these. On most systems, you should also track % Processor Time (under the Processor object). On occasion you will see spikes over 80 percent; this is normal unless the sustained % Processor Time is at 80 percent or higher for long periods. If that is the case, you could have a CPU bottleneck. To remedy the situation, you might have to get a faster processor, add more processors, and/or change the disk configuration.

In addition, I recommend setting counters for the following:
Processor: % Privileged Time - the amount of time the processor spent performing operating system processes
System: Processor Queue Length - the number of threads waiting for processor time; this equates to CPU activity

SQL Server: Buffer Cache Hit Ratio - the percentage of page requests satisfied from the buffer cache without reading from disk. You always want a ratio of 90 percent or more. If you have allocated as much memory as you can to SQL Server and still have not met the 90 percent ratio, add more physical memory.
SQL Server: General Statistics: User Connections - the number of users connected to the system
PhysicalDisk: % Disk Time - the amount of time the selected disk is busy
Memory: Pages/sec - the rate at which pages are read from or written to disk to resolve hard page faults

Logging counter data:
To start logging information:
1. Expand Performance Logs And Alerts.
2. Highlight Counter Logs.
3. Right-click Counter Logs and select New Log Settings, as shown in the figure below.
4. Enter a name and click OK.
5. In the General tab, add your counters, as shown in the figure below.

6. Click the Log Files tab to set the specific log file information, such as the location and/or a file size limit.
7. Click the Schedule tab to schedule your performance monitoring. If you do not configure a stop time, the log file will continue to record information until you manually stop it.
8. Click OK.

To analyze the data you have logged, you must open the log file and specify the appropriate attributes. To load your logged data:
1. Open System Monitor.