Top SQL Server Issues ABSTRACT: By Andy McDermid, September 2014. In a complex database environment, keeping tabs on the health and stability of each system is critical to ensure data availability, accessibility, recoverability, and security. Through performing thousands of health checks for clients, Datavail has identified the top issues affecting SQL Server performance. From misconfigured memory settings to missing backups, Datavail has gathered evidence from client health check history that identifies the most common issues DBA managers must correct for optimal database performance. Datavail's SQL Health Check is used not only as a diagnostic tool but also as a road map of the work that needs to be performed. From there, routine health checks have proven to improve database performance. In this white paper, Andy McDermid, SQL Server Principal DBA for Datavail, will share the top issues, the consequences of not taking action, and why consistent use of a SQL Server Health Check in conjunction with ongoing database management can lead to improved database environments and maximize the investment of time and resources.
Contents
Knowing When to Dig Deeper
Easy Picks
Server Configurations and Misconfigurations
Database Configurations and Misconfigurations
Policy Based Management
Best Bang
Other Server and Database Configurations and Misconfigurations
Baselines
The Long Road
Datavail's SQL Health Check
Conclusion
About Datavail
Knowing When to Dig Deeper
Many DBAs come from backgrounds administering databases for a single company or environment, but remote DBAs work within many different environments for many diverse clients. This can be really exciting work. Over the course of, say, a single week, a remote DBA like those we have here at Datavail is exposed to a wide cross-section of distinct server applications and technologies, along with the associated challenges and opportunities. Over the course of a few days we have the opportunity to interact with multiple SQL versions, touch on a variety of SQL technologies, administer the most simple to the very complex installations, and at least dabble, if not outright grapple, with just about every aspect of SQL Server, from security, to tuning, to high availability. And all this among a diverse variety of environments while interfacing with clients and teams of other DBAs and IT pros. There is a downside, of course, as broad rivers tend to run shallow. One of the challenges for any DBA is knowing when and where to dive deeper into an environment or a specific issue to get a better understanding and help uncover solutions. One method we employ at Datavail to track our clients' SQL Server systems is automated monitoring: a background process continually watches the SQL Server for metric values that breach preconfigured thresholds. Implementing a SQL Server monitoring solution is essential, but only half the picture, since monitoring is generally reactive; a threshold is breached, an alert is fired, a DBA is notified, and the effort towards resolution begins. Also, automated monitoring cannot catch every issue. For example, consider a case where nightly backups execute flawlessly, but the backup file is retained for only a day or less. Automated monitoring might only notice the database was backed up; only a more careful look would reveal the shortcoming in the overall backup plan.
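As a concrete illustration of this kind of deeper look, the msdb backup history tables can reveal gaps that threshold-based monitoring misses. The query below is a sketch, not one of Datavail's actual health-check scripts; note that a recent history row does not prove the backup file still exists on disk, so retention must be verified separately.

```sql
-- Most recent full backup per database, from msdb history.
-- A NULL last_full_backup means the database has never been backed up.
SELECT  d.name                     AS database_name,
        MAX(b.backup_finish_date)  AS last_full_backup
FROM    sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
        ON b.database_name = d.name
       AND b.type = 'D'            -- 'D' = full database backup
WHERE   d.name <> 'tempdb'         -- tempdb is never backed up
GROUP BY d.name
ORDER BY last_full_backup;
```

Sorting ascending puts the stalest (or missing) backups at the top of the result.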
So, keeping SQL Servers up to configuration, security, performance, and maintenance standards (AKA health standards) is just as essential as monitoring. This is the proactive side of database administration: catching the inadequate backup retention or similar issues before they become real problems. Not only will it reduce the automated monitoring alerts and reactive responses, it allows the DBA (and other users and administrators) some peace of mind. This is important because once some of these foundational concerns are off the table, DBAs gain time to get engaged within the environment on a more technical level. The most common issues on this front are not always deeply technical; some of these topics and practices are just good housekeeping. For instance, it is not glamorous technical problem-solving to ensure some parameter like auto-shrink is off. But if a DBA knows it's off, and it's staying off, it frees up time to find out where they might dig into issues, think through scenarios, and suggest, propose, plan and implement solutions. At Datavail, the best tool we know for doing a deeper dive for a client, to ensure everything is up to par, is our Health Check (HC). An initial HC identifies what needs to be done to bring a SQL Server up to standards. Regular, recurring HCs keep a SQL Server up to standards and provide peace of mind that the SQL instance is stable and that everything looks good, just like last week. This paper is an inventory of some of the most common SQL Server issues exposed by first-run and subsequent HCs that Datavail DBAs have found when we initially bring a server on board and under our administration. Hopefully this will provide a little insight into the implications of some of the basic issues, and give some guidance in getting a SQL instance up to good health and keeping it there. The paper is broken down into three sections.
In each section we'll give some general background on a few SQL Server features and discuss when, why, and how they might be misconfigured. We'll also identify their out-of-the-box default configuration and discuss whether that value is appropriate. If the default value is not appropriate, we'll discuss other, better options. The three sections are:
Easy Picks: Top issues that are easy to find and, usually, easy to fix. Not too surprisingly, some of the most frequent problems are the easiest to address, presumably since these are the most exposed and easiest knobs and dials to tweak and, in some cases, have misleading names that seem to promise easy performance improvements.
Best Bang for the Buck: These issues may require some evaluation and analysis; not too much, but more than the easy picks. Because these configuration options, unlike the set-it-and-forget-it options of the easy picks section, control how a SQL Server performs, they are tuning options that should be adjusted to optimize a SQL instance.
The Long Road: These options may require some planning, a service outage to implement, or long-term monitoring and further/ongoing adjustment. These configuration options require even more consideration in terms of pre-implementation planning and/or post-implementation review, and may or may not have significant impact on server performance and health.
© 2014 Datavail, Inc. All rights reserved.
Easy Picks
Server Configurations & Misconfigurations

Priority Boost
Everyone wants their SQL Server to go faster. That's understandable, and there are ways to work towards that goal, but enabling priority boost is not one of them. Microsoft (MS) explains: "If you set this option to 1, SQL Server runs at a priority base of 13 in the Windows 2008 or Windows Server 2008 R2 scheduler. The default is 0, which is a priority base of 7." It is perhaps a holdover from days past, when it was not yet an established best practice to dedicate an OS to a SQL instance. Since it is easily accessible via SSMS as well as sp_configure, and has such an intriguing name, misconfiguration may be the work of a well-meaning administrator. The trouble here is that the SQL OS already does a fine job of scheduling its own work, and it gets along fine with the Windows scheduler, so there is really no good use case for this parameter. Apparently MS agrees, since this configuration has been deprecated. If it is enabled, the risk is starving the OS of CPU resources and thereby taking down the whole system. The default setting for priority boost is 0 and that is the best option. But do make note: changes to this option require a service restart.

Recovery Interval
Given a crash or unexpected service restart, it is desirable that the time to get a database back online be as short as possible. Hypothetically, if every change to a database could somehow be immediately written to file, the recovery process could be, in effect, instant (i.e. there would be no need for recovery). Of course, that is not how computers work; besides being hypothetical, this kind of synchronous disk writing would be unacceptably slow. Instead SQL makes use of a buffer pool. Changes to the database are recorded in the buffer pool, and periodically a checkpoint process runs to write the current batch of buffer pool changes to disk. But what if those changes are lost due to a service interruption before the checkpoint?
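Before moving on, the priority boost fix described above can be sketched with sp_configure. Priority boost is an advanced option, so advanced options must be made visible first; this is a sketch to adapt, not a script to run blindly in production.

```sql
-- Make advanced options visible, then inspect priority boost
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'priority boost';     -- run_value of 1 means it is enabled

-- Return it to the recommended default of 0
EXEC sp_configure 'priority boost', 0;
RECONFIGURE;
-- Note: this change takes effect only after a service restart
```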
Thanks to write-ahead logging, all the changes are captured in the transaction log. During a restart (after a crash) those changes are applied to the database during the roll-forward process. The time it takes to roll the log forward is, mostly, the recovery time, and so recovery time is directly related to the amount of change recorded in the transaction log. SQL Server keeps track of the amount of information in the log and, when it estimates the time to roll the log forward has reached one minute (typically about 10MB of log), it initiates a checkpoint. By default the sp_configure option recovery interval is set to 0, which indicates SQL Server is auto-tuning this time-to-roll-forward target to one minute. Increasing this setting will result in longer recovery times, but also less frequent checkpoints. Reducing checkpoint frequency can be beneficial in some high-write or slow-disk environments to space out I/O. However, in most cases the default setting is ideal.

CPU Affinity Mask & IO Affinity Mask
There are cases where CPU affinity mask and IO affinity mask can be put to good use. For CPU affinity, the most common application is within a shared SQL instance environment, to designate a subset of logical CPUs for each instance. It can be helpful to give each instance its own set of CPUs so they do not compete. Before enabling it there are several factors to take into account. First, consider the logical CPU count. The affinity mask is a bitmask covering up to 32 logical processors, so for servers with more than 32 logical processors there is a second configuration option, affinity64 mask, to manage affinity on CPUs 33 through 64. (Note that the affinity mask options are deprecated as of SQL Server 2008 R2.) Similarly to CPU affinity mask, the IO affinity mask must be configured twice for a server with more than 32 logical CPUs. Typically it is used in high-IO scenarios to dedicate a subset of the logical CPUs to IO. This set of CPUs should never overlap with the CPUs configured for CPU affinity if both options are used together. You'll find all the affinity configuration options via sp_configure (as well as SSMS), but it is usually best to leave these configuration options set to the default value of 0 unless there is a special case and the configuration has been well tested.

Database Configurations & Misconfigurations

Page Verification
In the days of SQL 2000, database page writes were verified with TORN_PAGE_DETECTION. For each 512-byte disk sector write, a bit in the page header was set. Upon read, if SQL discovered any inconsistencies comparing the header bits to the actual page data it
indicated an incomplete write, and an error was thrown (824). Since SQL 2005, a better page verify option is available: CHECKSUM. The old option was kept around for backward compatibility, but there is no reason not to use the more recent version, which provides a much better page corruption check. When CHECKSUM is enabled, upon write SQL calculates a checksum value for the whole page and records it in the page header. Upon read, SQL calculates the page checksum again and compares it with the header checksum; if the checksums do not match, an error (824) is thrown. There is also the option to skip page verification by setting this parameter to NONE; however, the minimal CPU overhead required to calculate the checksums on read and write is surely worth the quick discovery of any corruption issue. The default setting for a new database on a recent version of SQL Server is CHECKSUM. Databases restored from earlier versions carry the value of this setting with them, and that is the typical reason a database would be found with a non-ideal page verify option.

Auto-close\Auto-shrink
You would not need to search the internet too long to find one of the many cautionary articles advising against turning on the auto-shrink database setting. If you don't know better, it can be a tempting solution to manage database growth and avoid out-of-disk errors. However, typically, at least in un-administered environments, it is more likely that transaction log bloat is the root cause of full disk situations. Auto-shrink is not just a benign, misused configuration; when it is on it has some serious performance implications. First, since SQL attempts an auto-shrink operation approximately every 30 minutes, if the database grows again between shrinks, the database may enter a resource-wasteful pattern of grow-shrink-grow and so on. This pattern consumes CPU and IO on every cycle. Furthermore, since any shrink operation moves pages from the front of the file to empty slots at the back of the file, there is no surer method to completely fragment database data.

Auto-close works just as the name implies: when the last connection ends, the database switches to a closed state. When a connection attempts to access the database it must again change its status to open to accept the connection. In most cases this is simply unnecessary overhead. Auto-close set to ON is the default value for the free editions of SQL Server, both SQL Express and MSDE. So, often when it is found misconfigured on a production server it is because the database was restored from one of these editions.

Policy Based Management
Once the values of the configuration options on a SQL instance are identified and set, how do you keep them that way? This can be an especially frustrating issue when more than one database administrator manages a SQL instance. Policy Based Management (PBM) is a handy system that ships with all editions of SQL Server. Touted as a management tool for many SQL instances in enterprise environments, it also works well deployed at the local level to manage individual instances and ensure the instance configuration stays up to standards. To summarize, this feature allows a DBA to quickly evaluate a SQL instance and compare server and database configurations against preset compliance thresholds. Many of the health parameters discussed in this paper are good candidates for regular PBM evaluations. The evaluation results are easy to understand and act on, in some cases bringing the configuration back into line in a single operation.

Auto Create Statistics, Auto Update Statistics, Auto Update Statistics Asynchronously
For every index built, SQL creates a set of statistics based off the index key.
When the query engine builds a query plan to access the data in the index, it refers to the statistics to see, for example, whether it should perform a seek operation or a scan operation. As the data in the index changes (the row count increases or decreases, or key values are updated), the statistics too must be updated to reflect the new current state of the index; otherwise the query engine may make poor decisions and build suboptimal query plans. The Auto Create Stats\Auto Update Stats\Auto Update Stats Asynch database settings help control this behavior. Setting Auto Create Stats ON allows SQL to create the statistics as needed. Setting Auto Update Stats ON allows SQL Server to automatically update statistics when
approximately 20% of the rows have changed (this is a simplified explanation; there are other factors that figure into the Auto Update Stats operation). Lastly, setting Auto Update Stats Asynch ON allows SQL Server to delay the automatic statistics update until after the completion of the current transaction, the one that triggered the discovery of the out-of-date stats. Except for Auto Update Stats Asynch, these options default to ON, and it is rare that they would need to be different. There is an argument that auto-creating and auto-updating stats has performance overhead. This may be a case where Auto Update Stats Asynch should be turned on (Auto Update Stats Asynch is only an option if Auto Update Stats is on). If it is determined that Auto Create and Auto Update should be turned off, then manually created stats and a manual process to update them should be implemented. Note that if Auto Update Stats is off, even a manual stats update will not invalidate plans in the plan cache, potentially resulting in out-of-date, subpar plans.

Missing/Misconfigured Backups
Regular and reliable database backups are the foundation of database administration. Planning and implementing a backup plan can be intense: identifying the correct backup type, schedule and frequency, retention days, and target can be complicated. However, once properly configured, a good plan should run consistently and smoothly. In fact there are only a few issues that commonly result in missing or dated backups, but these basic issues can sometimes be challenging to overcome. In some cases, when the backup is written directly to a remote disk, an inconsistent connection might result in failed backups. The larger the database and the slower the network, the more likely even a single glitch can cause this kind of failure. Regardless of the state of the network, efficiency is the best medicine here. Here are a few methods to help reduce backup durations:
- Evaluate and optimize the entire backup job: remove any extra steps that are not specifically for backing up the database. For example, put off backup verification until later.
- Compress backups: compression happens before the backup is written to disk, so there will be less network traffic.
- Use the BUFFERCOUNT argument of the BACKUP statement: similar in practice to striped files; SQLCAT has good information on these options.
- Stripe the backup into 4 files: 4 files is the ideal number to maximize network throughput.
Of course, reducing the backup sizes and backup job duration has a number of benefits besides reducing the exposure to network issues; there is no downside to making backups faster, so apply these suggestions freely.

More typical than failures due to network glitches are failures due to disk space issues. A database backup does not include unallocated file space, so the backup is usually smaller than the sum of the data and log files. Look at the reserved result from sp_spaceused to estimate the full backup size. The target disk will need at least that much free space, and that is just to keep a single full backup on hand. There are many varieties of systems out there with individual requirements, but as a general rule, best backup practice for an OLTP production system dictates multiple days of backups be retained, including transaction log backups and possibly differential backups. Disk capacity for the backup drive should take this into account and allow plenty of headroom to ensure no backup failures are due to lack of free disk space. Other reasons for missing or dated backups include the lack of any backup jobs; failing third-party backup tools which may not be capturing backups; and the misunderstanding that native SQL backups can be replaced by SAN snapshot technology or VM backups, which is not always true depending on RPO and RTO. And sometimes, simply, the necessity for database backups is overlooked.
In that case there can be a real threat to production above and beyond disaster recovery concerns since, if the database is using the full recovery model, the un-backed-up transaction log will continue to grow until it fills the disk and user transactions begin failing with a 9002 error. In any of these cases, the resolution involves determining the correct recovery model and creating an ideal backup plan for each database. Or, at the very least, a simple SQL Maintenance Plan can be hastily configured as a stopgap to get database backups in place until a more detailed plan can be implemented. Are there any default values to consider regarding database backups? In fact, a new database in full recovery mode will operate as if in simple mode until its initial backup. Until that first backup, the database does not risk a full transaction log, since log backups are not required for a database in simple recovery mode. So one could say a new database defaults to simple recovery mode.
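The speed-up suggestions above (compression, striping, BUFFERCOUNT) can be combined in a single BACKUP statement. This is a sketch: the database name, UNC paths, and BUFFERCOUNT value are placeholders to be tuned per environment, and backup compression requires an edition that supports it.

```sql
BACKUP DATABASE SalesDB                        -- hypothetical database
TO  DISK = N'\\backupsrv\sql\SalesDB_1.bak',   -- striped into 4 files
    DISK = N'\\backupsrv\sql\SalesDB_2.bak',
    DISK = N'\\backupsrv\sql\SalesDB_3.bak',
    DISK = N'\\backupsrv\sql\SalesDB_4.bak'
WITH COMPRESSION,        -- compress before writing: less network traffic
     BUFFERCOUNT = 64,   -- illustrative value; test before standardizing
     CHECKSUM,           -- validate page checksums while reading
     STATS = 10;         -- progress message every 10 percent
```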
Best Bang
Other Server and Database Configurations and Misconfigurations
In a similar camp as the easy-picks server and database configurations reviewed in the section above, there are a few other database and server parameters commonly misconfigured. Unlike the one-size-fits-all, set-it-and-forget-it philosophy applied to those parameters, the ideal configurations for the settings in this section are OS, SQL instance, and/or database specific, and have measurable effects on performance. They may require regular review over a period of time. In other words, these parameters are used to tune an instance or database for stability and performance.

Server Minimum and Maximum Memory
Out of the box, a new installation's Minimum Memory is configured as 0 and the Maximum Memory is configured to 2147483647 MB, which is effectively unlimited. Microsoft considers this default configuration to be dynamic, allowing SQL Server to acquire memory as needed and release it when it is in demand by other software or the OS. This dynamic management may be fine for non-critical implementations, but for production SQL instances these values are not ideal. Since the Maximum Memory value controls the size of the buffer pool where the data pages are kept, it is important to allow SQL Server as much of the available RAM as possible so that disk IO is minimized. However, it is also important to cap SQL's Maximum Memory to allow a reserve for the OS and any other applications sharing the server. An excellent rule of thumb proposed by Microsoft SQL Server MVP Jonathan Kehayias for figuring the amount of RAM to reserve for the OS goes as follows:
- 1 GB of RAM for the OS
- 1 GB for each 4 GB of RAM installed from 4-16 GB
- 1 GB for every 8 GB of RAM installed above 16 GB
The remaining RAM can be assigned to SQL via the Maximum Memory setting. (This suggestion assumes a stand-alone, dedicated instance of SQL Server; in other situations more RAM should be reserved.) On the other end of the spectrum, it is certainly not preferable to allow any other process to steal memory from SQL, so the Minimum Memory value of 0 should be reconfigured as well. One option is to set it equal to the Maximum Memory value, thereby effectively making the buffer pool's RAM usage static.

In some cases, while observing memory usage on a SQL Server, you may notice a discrepancy between SQL's Maximum Memory value as configured and the value of the Memory field for sqlservr.exe on the process tab of the Windows Task Manager. This can be for two reasons. First, SQL allocates memory as needed up to the Maximum Memory value, so if the demands of the instance have not yet required the full amount of memory available, the Task Manager value will be low. Second, on 64-bit installations, if the lock pages in memory feature is enabled, the Task Manager value will not report any buffer pool memory (it will only show multi-page allocations). In either case, a better set of metrics to observe to get a sense of a SQL instance's memory usage are the perfmon counters Total Server Memory (KB) and Target Server Memory (KB).

Max Degree of Parallelism and Cost Threshold for Parallelism
According to results in Figure X, many SQL instances experience CXPACKET waits. When a query uses parallelism it recruits additional threads to speed up the process when, for example, it needs to do a large table scan. This wait type accrues when a task waits for the parallel aspects of a query plan to synchronize and return results. The common reaction to observed high CXPACKET waits is to reduce the server's Max Degree of Parallelism (MDOP) setting. The default value for this sp_configure option is 0, meaning queries can go parallel to as many degrees as there are logical CPUs. Forcing queries to not go parallel, or to only go parallel to a limited degree, will indeed reduce this wait type, but at the expense of operations that may benefit from parallelism. A better configuration to control parallelism is the Cost Threshold for Parallelism, discussed in the next paragraph.
As for MDOP, Microsoft recommends setting the degree of parallelism to the number of logical CPUs per NUMA node. So for a 16-CPU server with 4 NUMA nodes, the suggestion is to set MDOP to 4. Working hand in hand with MDOP, Cost Threshold for Parallelism (CTFP) controls when a query becomes eligible for parallelism (of any degree).
The default value is 5, meaning any query with a cost greater than 5 can be considered for parallelism. Ultimately, it is up to the SQL query optimizer to decide on a parallel versus a serial plan and, if parallel, to what degree. But the default values for the MDOP and CTFP parameters, 0 and 5 respectively, leave some potential for inefficiency if relatively low-cost queries do go parallel. Both options should be carefully considered and adjusted for maximum throughput. Contrary to common understanding, CTFP is not a measure of seconds. The cost is the query cost, the same as seen in a query plan; according to Microsoft, this cost value refers to the estimated elapsed time, in seconds, it would take to run the serial plan on a specific reference hardware configuration.

Duplicate, Overlapping, and Unused Indexes
Index maintenance is high on the list of a DBA's responsibilities. One aspect of this is keeping indexes efficient by aiming for a high ratio of use count to used disk space; the more the indexes are used for a given size of indexes on disk, the more efficient the overall index layout. Three factors are the main contributors to a low ratio, and they may also have other undesirable performance impacts:
- Duplicate indexes: obviously a waste of disk space, duplicate indexes also drag down performance, since both indexes must be maintained during inserts, updates, deletes, rebuilds, etc. A duplicate index can usually be dropped without much concern.
- Overlapping indexes: similar to duplicates, overlaps waste space and incur performance penalties, though a careful evaluation should be performed to determine the correct index to keep.
- Unused indexes: these also take unjustified space on disk and incur needless IO; however, in some cases what looks like an unused index now may be required by some regular but infrequent query. One option here is to disable the index until it is needed and thereby reduce the impact of ongoing maintenance.
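One way to start hunting for unused indexes is the index usage DMV. The query below is a sketch; keep in mind that sys.dm_db_index_usage_stats is cleared at each service restart, so a low read count only means "unused since the last restart," which ties back to the infrequent-query caveat above.

```sql
-- Nonclustered indexes in the current database, ranked by how little
-- they are read versus how often they must be maintained on writes.
SELECT  OBJECT_NAME(i.object_id)                      AS table_name,
        i.name                                        AS index_name,
        COALESCE(s.user_seeks + s.user_scans
                 + s.user_lookups, 0)                 AS reads,
        COALESCE(s.user_updates, 0)                   AS writes
FROM    sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS s
        ON  s.object_id   = i.object_id
        AND s.index_id    = i.index_id
        AND s.database_id = DB_ID()
WHERE   i.type_desc = 'NONCLUSTERED'
  AND   OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
ORDER BY reads ASC, writes DESC;
```

Indexes at the top of this list (many writes, few or no reads) are the candidates for the careful evaluation described above.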
Baselines
How do you know if any of these configurations are helping or hurting SQL Server? You don't, not without some measurable metrics. Consider the twist nozzle on an ordinary garden hose: twist the barrel a little and you get a gentle mist; twist a few turns and you get a forceful jet. Like the nozzle, SQL Server may take some adjusting to get it just right. The nozzle's throughput is noticeable through the visual, tactile, and audible feedback as you twist; SQL throughput must be measured via a set of baseline metrics running in the background. This baseline should be reviewed after each significant change in configuration. If you can prove the change resulted in even a small improvement, things are moving in the right direction. An easy baseline solution is to create a counter set in Windows Performance Monitor.

Stats out of date
Closely tied to indexes, from a maintenance standpoint as well as from a submitted query's point of view, are statistics. As described above in the Auto Create Statistics section, if the statistics are not maintained and up to date, queries may not even consider the carefully designed and maintained indexes. Ensuring the database configuration option Auto Update Statistics is ON is a good start. Statistics are updated automatically based on an evaluation of these thresholds:
- The table size has gone from 0 to more than 0 rows (test 1).
- The number of rows in the table when the statistics were gathered was 500 or less, and the colmodctr of the leading column of the statistics object has changed by more than 500 since then (test 2).
- The table had more than 500 rows when the statistics were gathered, and the colmodctr of the leading column of the statistics object has changed by more than 500 + 20% of the number of rows in the table when the statistics were gathered (test 3).
We can generalize some details here and say that for tables of 500+ rows, statistics get updated when the number of changes is ~20% of the row count.
For larger tables, where 20% of the number of rows in the table may be a very large number, the auto update stats feature may fall short of initiating needed updates. This is one reason it is important to include a manual process to update statistics as part of a general maintenance plan.
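Such a manual statistics pass is easy to fold into a maintenance job. A sketch; the table name is a placeholder, and schedule and sampling should be tuned to the environment:

```sql
-- Refresh statistics database-wide; only stats with row modifications
-- since the last update are touched, so this is relatively lightweight
EXEC sp_updatestats;

-- Or target a specific large table where the ~20% threshold is too
-- coarse, scanning every row for accuracy (heavier, so run off-hours)
UPDATE STATISTICS dbo.OrderDetail WITH FULLSCAN;  -- hypothetical table
```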
The Long Road
File Locations: System and User DB Files on the Same Disk, Data and Log Files on the Same Disk
Having system and user database files on the same disk is a disaster recovery concern. If the single disk goes down, all is potentially lost. If the files are on separate disks, at least the situation is isolated. Similarly, it is considered bad practice to have log and data files on the same disk, since their IO profiles are very different. Data files typically experience random IO, whereas log files are written sequentially (with several exceptions, like replication, which reads the transaction log). These guidelines mostly apply to physical DAS storage. When the data and log files are stored on a SAN, the IO profile concerns are less valid, since there may be no clear separation between logical drives at the physical level of the SAN. Even so, just from an organizational standpoint, there is good sense in separating these file types.

Autogrow as Percent or Very Low
Default values for database size and growth are typically not the best choice; however, oftentimes these initial configurations persist well after the database has grown into full production mode. Here is a look at the default database data file and log file size and growth settings. Consider the data file's default auto-growth setting of 1 MB. Given an active database, this can result in more-or-less continual growth commands being issued for the file. In some circles the theory is that continual small growths will have less impact than infrequent large growths, but in fact either case can be avoided by appropriate sizing and, as needed, scheduled growths during off-business hours. Likewise, the log file's default percentage-based growth setting is inappropriate. If, for example, the log file were to grow to 100GB, the next auto-growth would be 10GB; if it is 500GB, that's a 50GB growth. The larger the file, the larger the one-time growth. When the file does need to grow, those large growths can take some time, and while the log file grows everything is blocked. Like the data file, the need for a larger log file should be anticipated and the growth operation scheduled during off-business hours. Instant file initialization allows data files to grow very quickly; however, this option is not available for database log files, which is all the more reason to carefully manage log file growth.

Virtual Log Files
Misconfigured auto-growth settings may also result in excessive virtual log files (VLFs) in the transaction log. VLFs are logical management units within transaction log files: when a log file grows, it grows in VLF blocks, and when a log file truncates or shrinks, those operations also work on the file in units of VLFs. Additionally, during any database recovery operation (including cluster or mirror failovers, service restarts, and database restores) each VLF is examined to determine which have active transactions to roll forward. As you can imagine, an excessive number of VLFs can result in unnecessary overhead and delays in recovery time. Small values for transaction log auto-grow are the typical reason for too many VLFs. The general rule of thumb here is that the auto-grow setting should be configured to generate VLFs of about 512MB to 1GB. Of course, as mentioned above, the better solution is not to rely on auto-growth but to manually size the transaction log. SQL's internal process to create VLFs follows these guidelines:
- Growths of up to 64MB generate 4 VLFs
- Growths larger than 64MB and up to 1GB generate 8 VLFs
- Growths larger than 1GB generate 16 VLFs
Using these algorithms we can control a log file's growth to achieve something close to the ideal 512MB-1GB VLF size. Often the resizing process is part of a larger project to reduce VLF counts.
The following steps outline the whole effort:

1. Before you shrink the log file, take note of the current size in MB, assuming this is the correct target size (or note whatever your new size should be).
2. Shrink, back up, and repeat as necessary to minimize VLFs. This works best in a quiet period so the log can clear easily. If there are many VLFs, quite a few backup/shrink cycles may be required, since the shrink clears only unused VLFs.
3. Divide the original size by 8000MB, round up, and grow the log back that many times in increments of 8000MB (use 8000MB because there is a bug in 4000MB growth increments).
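As a sketch of the steps above, the shrink-and-regrow cycle might look like the following T-SQL. The database name, log file name, and backup path are placeholders, not values from this paper; adjust the target size to your environment and run during a quiet period.

```sql
USE MyDatabase;

-- 1. Note the current log size (sys.database_files reports size in 8KB pages).
SELECT name, size * 8 / 1024 AS size_mb
FROM sys.database_files
WHERE type_desc = 'LOG';

-- 2. Back up the log and shrink; repeat until the VLF count stops dropping.
BACKUP LOG MyDatabase TO DISK = N'X:\Backups\MyDatabase_log.trn';  -- placeholder path
DBCC SHRINKFILE (MyDatabase_log, 1);

-- DBCC LOGINFO returns one row per VLF, so the row count is the VLF count.
DBCC LOGINFO;

-- 3. Grow the log back in 8000MB increments to the noted size, and set a
--    fixed-size auto-growth rather than a percentage for future growths.
ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_log, SIZE = 8000MB);
ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_log, SIZE = 16000MB);
-- ...continue in 8000MB steps up to the original size...
ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_log, FILEGROWTH = 8000MB);
```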
Since each log file growth over 1GB creates 16 VLFs, growing in 8000MB increments results in an approximately ideal VLF size of 512MB. For very large transaction log files, say over 240GB, the growth statements can be issued in multiples of 16000MB to create approximately 1GB VLFs. For smaller logs, say less than 4GB, a single growth statement is usually fine.

TempDB

How many files should TempDB have? There are a number of recommendations, but the bottom line is: as many as it takes to eliminate page allocation contention. Every time tempdb is used, the thread accessing tempdb must first find enough free space to build the temp object. It determines where it can locate this space by reading special pages (GAM, SGAM, and PFS) which are evenly spaced throughout a database file. These pages serve as a kind of index to unused and available disk space. Every database has these pages, but tempdb is a special case since it is a shared resource available to many users. Herein lies the trouble: a bottleneck can develop when many threads need tempdb space and so require latches on these special pages. The resolution is simply to distribute the load by adding more tempdb files.

If the need for multiple tempdb files is evident, the current suggestion is one file per logical CPU up to 8 logical CPUs; then, if contention still exists, add 4 files at a time (up to the number of logical CPUs). All files should be the same size, because this allows SQL Server's proportional fill algorithm to evenly allocate space among the files. Proceed carefully: too many files can be a performance problem as well, since sort memory spills may spend extra time doing round-robin allocations among many tempdb files.

Of note, trace flag 1118 forces all allocations, system-wide, to be in extents (not pages) and so is an option to specifically reduce contention on SGAM pages, including tempdb SGAM pages.

Missing Indexes

The missing index DMVs may be one of the more underused features in SQL Server. Available since version 2005, these DMVs (sys.dm_db_missing_index_group_stats, sys.dm_db_missing_index_groups, sys.dm_db_missing_index_details and sys.dm_db_missing_index_columns) allow a DBA to discover what indexes the SQL query engine would have liked to use had they been available. Each time a query is executed, the query engine records the ideal index for that query. If that index exists, it is used; if it does not, SQL Server stores the information, and this missing index info can be reviewed later for possible index implementation.

A missing index report can be created by joining the DMVs to include equality, inequality, and include columns, as well as a potential improvement metric based on the query cost and execution count. These missing index reports do have some limitations, and consensus indicates DBAs should take care not to produce needlessly large or duplicate indexes while following the missing index suggestions. However, the query engine knows what it needs, so regardless of a few known limitations it's a good practice to evaluate each suggested missing index. One caveat in creating indexes based on the missing index DMVs is that the suggested equality columns may not be ordered ideally in sys.dm_db_missing_index_details. The correct order puts the most selective columns first, where column selectivity can be calculated as count-of-distinct-values divided by count-of-rows-total.
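As a sketch of the tempdb guidance above, bringing tempdb to four equally sized data files on a server with four logical CPUs might look like this. The file count, logical names, paths, and sizes are illustrative assumptions, not prescriptions.

```sql
-- Resize the existing primary tempdb data file ('tempdev' is the default
-- logical name), then add files of the same size and growth increment.
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 2048MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2, FILENAME = N'T:\TempDB\tempdb2.ndf',
              SIZE = 2048MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev3, FILENAME = N'T:\TempDB\tempdb3.ndf',
              SIZE = 2048MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev4, FILENAME = N'T:\TempDB\tempdb4.ndf',
              SIZE = 2048MB, FILEGROWTH = 512MB);
```

Keeping every file the same size is what lets the proportional fill algorithm spread allocations evenly; trace flag 1118 is typically enabled separately as a service startup parameter (-T1118).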
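One common shape for the missing index report described above joins three of the DMVs. This is a sketch: the improvement measure below is a conventional weighting of cost, impact, and use counts, not an official formula.

```sql
-- Suggested missing indexes, highest estimated benefit first.
SELECT
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.avg_total_user_cost * migs.avg_user_impact
        * (migs.user_seeks + migs.user_scans) AS improvement_measure
FROM sys.dm_db_missing_index_group_stats AS migs
JOIN sys.dm_db_missing_index_groups AS mig
    ON migs.group_handle = mig.index_group_handle
JOIN sys.dm_db_missing_index_details AS mid
    ON mig.index_handle = mid.index_handle
ORDER BY improvement_measure DESC;
```

Remember the caveat above: before creating an index from a suggestion, reorder the equality columns by selectivity rather than taking them as listed.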
Datavail's Health Check

SQL Server is complicated. Considering the many configuration options and potential misconfigurations (some, but not all, of which are examined above), it should be apparent that more than just automated monitoring is required to keep a SQL Server instance up to standards of health. The tool Datavail uses to keep SQL Servers up to standards is our Health Check. Some of the key features are:

Comprehensive: the Health Check examines all of the configurations and settings discussed above for health-standards compliance, as well as many more, including critical parameters of the OS and SQL instance (see additional documentation for the complete list). The report is organized by categories, including Performance, Availability and Recoverability, Configuration, Maintenance, Security, and others. Each category summarizes the count of in- versus out-of-compliance health parameters and allows the user to drill down for a more detailed view of a particular configuration. A summary leads the report to give a quick look at the general state of the target SQL Server's health.

Configurable: not every SQL instance has the same template of preferred configurations. The Health Check allows customization of preferred configuration settings for each instance.

Actionable: the Health Check serves as both a discovery tool and a management tool. Initial runs may highlight areas that require improvement to meet health standards. Once a SQL instance is up to standards, repeated executions of the Health Check ensure it stays there. The Health Check process can be automated and scheduled regularly to generate a historical picture of SQL instance health over time. Since the results are archived at a central location and rendered via SQL Server Reporting Services, the reports are easy to view, distribute, and act upon.
Modular: the Health Check allows any combination of categories of health parameters to be included or excluded from a report. An example of use is a weekly backup report that examines only the Availability and Recoverability categories.

Conclusion

There are many intricacies to keeping SQL Servers healthy, let alone completing everyday tasks. DBAs have a lot of responsibility for keeping the growing amount of data safe and clean. We've found that clients who regularly monitor and conduct health checks see their database operating environment improve over time. If you are like many DBAs, there just isn't enough time in the day. If you are interested in learning more about our health checks, don't hesitate to get in touch by visiting our website. You may be glad you did.

2014 Datavail, Inc. All rights reserved.