SQL Server Best Practice

1 MB File Growth

SQL Server ships with a default configuration that autogrows data files in 1 MB increments. By growing in such small chunks, you risk ending up with a heavily fragmented data file on disk. Consider a 200 GB database that reached its size through 200,000 autogrow events: its data file could end up in more than 200,000 fragments spread across the disk. To avoid a highly fragmented data file, we recommend changing this value on every new database you create, as well as on existing databases that still have the 1 MB autogrow configured.

What should you use instead? There is no single answer that fits every case; it depends on the data volume you expect in the database. If you create a database that you expect to grow to only a few megabytes, growing in 1-2 GB increments would waste disk space, so choose a smaller, round number such as 32, 64, 128 or 256 MB. If you expect the newly created database to grow to hundreds of gigabytes, use increments of 1-2 GB instead. On SQL Server 2008 R2 and older versions, avoid increments of exactly 4 GB because of a known bug.

Changing the Default

Every new database is created by cloning properties from the model database, so you could change the default size and autogrow settings there and be done. Well... almost. If you change the default autogrow setting in the model database and create a new database through the SQL Server Management Studio GUI, the new database honors those growth settings, and you do not need to worry much about it. However, if you use the CREATE DATABASE statement, it will not honor the autogrow settings from model. So if you create your databases in T-SQL, be aware of this. A sketch of the autogrow change appears at the end of this section.

Autoshrink Enabled

Disk space is expensive, and SQL Server can help: it can automatically shrink databases to remove unused space and save you loads of money. Perfect, right? No! This option has far more drawbacks than advantages, so please do not enable autoshrink. Shrinking databases is the fastest way to create fragmentation: SQL Server takes the last page in the database, moves it to the first available free space, and repeats the process. This shuffles the deck and puts your pages out of order.

How to fix the problem: You can disable autoshrink with the following T-SQL command:

ALTER DATABASE mydatabase SET AUTO_SHRINK OFF

Using the GUI: In SQL Server Management Studio, right-click the database, open its properties, and turn off autoshrink on the Options tab.
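As promised above, here is a minimal T-SQL sketch of changing the growth increment. The names mydatabase and mydatabase_data are placeholders, and 256 MB is only an example size; look up the real logical file names in sys.database_files before running anything. modeldev is the logical name of the model database's data file on a default installation.

-- Set a fixed 256 MB growth increment on an existing data file
-- (mydatabase and mydatabase_data are placeholder names).
ALTER DATABASE mydatabase
MODIFY FILE (NAME = mydatabase_data, FILEGROWTH = 256MB);

-- The same change on the model database sets the default picked up by
-- databases created through the Management Studio GUI.
ALTER DATABASE model
MODIFY FILE (NAME = modeldev, FILEGROWTH = 256MB);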

File Growth in Percent

SQL Server also ships with a default configuration that autogrows transaction log files in 10% increments, and the default initial size of the log file is 1 MB. The first many growths are therefore relatively small chunks, but as the log file gets bigger, so do the growth increments. With an initial size of 1 MB and 10% autogrowths, the increments stay small for a long time, but once the log file reaches 10 GB, the next growth is a full 1 GB. There we see the problem. When the transaction log file grows, SQL Server has to zero out all the new bytes, so a 1 GB growth means writing 1 GB of zeros to disk. This happens synchronously, so the transactions running while the log file is full have to wait until it completes. On modern servers it usually does not take long to write 1 GB to disk, but it may still be long enough for your application to hit a timeout. Needless to say, the impact of the 10% autogrow only gets worse as the log file grows.

How to fix this: Instead of the 10% default, we recommend changing the autogrow setting to a fixed size in MB. The number should be small enough that your application does not suffer too much when a growth event happens, but big enough to avoid massive file fragmentation. There is no single right answer for every server, but something in the 128-256 MB range is usable in most cases. Moreover, to avoid these zeroing-out stalls when the log file grows, it is a good idea to set the initial size of the log file large enough for the expected usage.
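A minimal T-SQL sketch of both fixes. The names mydatabase and mydatabase_log are placeholders, and the 4 GB initial size and 256 MB increment are example values to be sized to your own workload.

-- Presize the log and switch from 10% growth to a fixed 256 MB increment.
-- mydatabase and mydatabase_log are placeholders; check sys.database_files
-- for the real logical name, and size the file to your expected log usage.
ALTER DATABASE mydatabase
MODIFY FILE (NAME = mydatabase_log, SIZE = 4GB, FILEGROWTH = 256MB);

Growing the file up front still requires the zero-initialization, but it happens at a time you choose rather than in the middle of a busy transaction.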

No Log Backup

When a database is in the full or bulk-logged recovery model, SQL Server does not automatically free up space in the transaction log file when transactions finish. It keeps the information in the log file because it assumes you are going to take transaction log backups. If no log backup is ever taken, the log file keeps growing until the disk is full, at which point your database stops accepting new transactions.

How to fix the problem: Configure SQL Server transaction log backups, or change the database to the simple recovery model. With simple recovery, SQL Server does circular logging: portions of the log are freed up when there are no open transactions, and SQL Server goes back to the beginning of the log and reuses the space when it can. Remember that choosing this solution leaves you without transaction log backups, which eliminates the possibility of point-in-time restores.

Number of Files in TempDB

A default installation of SQL Server has one data file in TempDB, and the server does not need to be very busy before this becomes a bottleneck. Under heavy IO activity in TempDB, contention can build up on the allocation pages (GAM, SGAM, PFS). TempDB is used for many things: most obviously temporary tables and objects, but also joins and sorts on data sets too large to fit in memory. To alleviate such bottlenecks, you simply add more data files to TempDB. How many files you need for an optimal setup depends on your specific configuration, but one data file per core is the common recommendation. So on a two-socket server fitted with two quad-core processors, you would configure TempDB with 8 data files and 1 log file; with hyper-threading enabled on those processors, you would go with 16 data files.

How to fix it: The default data file has the logical name tempdev (physical file tempdb.mdf). When adding more, you can simply call them tempdev02, tempdev03 and so on. A sketch of adding one extra file with a size of 256 MB and an autogrow of 256 MB follows this section; repeat the statement for each additional data file you need, changing the NAME and FILENAME values along the way. If you are in doubt about how many files to add, the core-count query shown below can help.
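Two minimal sketches for the steps described above. The FILENAME path is a placeholder and must point at your instance's actual data directory, and tempdev02 follows the naming suggested in the text.

-- Add a second TempDB data file, 256 MB initial size, 256 MB autogrow.
-- D:\SQLData is a placeholder path; use your instance's data directory.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev02,
          FILENAME = N'D:\SQLData\tempdev02.ndf',
          SIZE = 256MB,
          FILEGROWTH = 256MB);

-- Number of logical processors visible to SQL Server, a starting point
-- for deciding how many TempDB data files to create.
SELECT cpu_count
FROM sys.dm_os_sys_info;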

No Max Server Memory Configured

By default, SQL Server can use as much memory as it wants and leave nothing for the operating system. If no memory is left for the operating system, it starts using the page file, and the same goes for other services and processes running on the server. When the page file is in use, you will see slow performance: instead of working in memory, operations go through the page file, which sits on disk, and memory is much faster than disk.

How to fix it: Cap SQL Server's memory usage by setting the max server memory option with sp_configure. It is an advanced option, so turn on advanced options first; a minimal sketch of the change follows at the end of this article.

For more information, please contact: Kasper Kamp Simonsen, Business Intelligence Consultant & Partner, M: +45 20 82 70 39, @: kks@inspari.dk
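The sketch referred to in the max server memory section above. The 8192 MB cap is only an example; size it to your server's physical RAM, leaving enough headroom for the operating system and any other services running on the box.

-- 'max server memory (MB)' is an advanced option, so expose advanced options first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Example cap of 8192 MB; adjust to leave room for the OS and other processes.
EXEC sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;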