System Center – SQL Server 2008 R2 Performance Tuning

In this article I would like to share some SQL Server performance tuning tricks which will give an extra boost to your System Center infrastructure.

I will focus on SQL Server 2008 R2, as this is the latest SQL Server release. My aim was to create a generic tuning guide for System Center, but because I am a SCOM guy some topics are SCOM-specific.

I would like to thank Roger Breu, Data Warehouse Solutions Specialist at Microsoft Western Europe, for his valuable input and review. Thanks, Roger!

This article is divided into seven sections:

  1. Choose SQL Server Edition
  2. Install SQL Server
  3. Configure SQL Server
  4. SQL Server Properties
  5. Install Databases
  6. Database Properties
  7. Optimizing Performance

1) Choose SQL Server Edition

Standard or Enterprise?

If you are going to set up SQL Server in a virtual environment and want to use Dynamic Memory, make sure you choose the appropriate SQL Server edition!

SQL Server Standard: It is fully supported by Microsoft in a virtual machine, BUT it cannot deal with Dynamic Memory. The Standard edition simply allocates the memory assigned at start-up and keeps it; it cannot release memory back to the host.

SQL Server Enterprise or Datacenter: These editions are also fully supported by Microsoft in a virtual machine, AND they can deal with Dynamic Memory. Why? Because these editions support hot-add memory, which in this case allows virtual memory to be added to (and released from) the running instance.

Conclusion: If you are going to use SQL Server Standard, set the memory in your virtual machine to a fixed size! If you want to use Dynamic Memory, you must use SQL Server Enterprise or Datacenter edition.
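To check which edition a given instance is actually running, you can query SERVERPROPERTY; a minimal sketch, to be run against your own instance:

```sql
-- Returns e.g. 'Standard Edition (64-bit)' or 'Enterprise Edition (64-bit)'
SELECT SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion;
```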

SQL Server support policy in a virtualized environment:

Running SQL Server with Hyper-V Dynamic Memory

2) Install SQL Server


When you install SQL Server you must use the SQL Server collation SQL_Latin1_General_CP1_CI_AS! This collation is used by most of the upcoming System Center 2012 products, with exceptions such as Service Manager 2012. If you are not sure, check TechNet for the prerequisites. For SCOM 2012 it is SQL_Latin1_General_CP1_CI_AS!

For the System Center 2012 products you will find the supported configurations here.

How do I set up this collation? After you start the SQL Server setup you will sooner or later hit the “Server Configuration” dialog, which has a tab called “Collation”. Don’t use the default collation (change it!). On this screen you must select “Customize…”


Change the collation to SQL_Latin1_General_CP1_CI_AS
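If the instance is already installed and you just want to verify the server collation, SERVERPROPERTY can tell you:

```sql
-- Should return SQL_Latin1_General_CP1_CI_AS for SCOM 2012
SELECT SERVERPROPERTY('Collation') AS ServerCollation;
```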



I recommend installing SQL Server using “SQL Server and Windows Authentication” mode. This way you always have a “backdoor” to access your SQL Server using a SQL account. Of course, you have to choose this setting according to your company’s security policies.

You can set up the sa account during SQL Server setup, or change it later by opening the SQL Server properties and switching to “Security”.
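You can also check which authentication mode an instance is running without opening the dialog; a small sketch using SERVERPROPERTY:

```sql
-- 1 = Windows Authentication only
-- 0 = SQL Server and Windows Authentication (mixed mode)
SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS WindowsAuthOnly;
```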


3) Configure SQL Server

Display Current Settings

In order to get an overview of how your SQL Server is configured, you can run this query:

    SELECT name, value, value_in_use, [description]
    FROM sys.configurations
    ORDER BY name;

This will show you a table of the configured values. We will discuss some of these configuration settings in the next sections.


Of course you can also return just a subset of the information by adding a WHERE clause:

    SELECT name, value, value_in_use, [description]
    FROM sys.configurations
    WHERE name='max degree of parallelism' or name='backup compression default'
    or name='max server memory (MB)'
    ORDER BY name;


4) SQL Server properties

First we will look at the properties of the SQL Server instance itself.


Maximum Server Memory Setting

After a successful SQL Server installation, set “Maximum Server Memory” to an appropriate size. What does that mean? By default the SQL Server engine takes all memory available on your system and doesn’t leave any for the operating system. If only the SQL Server engine is installed on the Windows server, leave 2-3 GB for the operating system as a rule of thumb. This setting prevents SQL Server from eating up all your memory.

Glenn Berry, SQL Server MVP, suggests the following starting values:

Physical RAM    Max Server Memory Setting (MB)
4 GB            3200
6 GB            4800
8 GB            6200
16 GB           13000
24 GB           20500
32 GB           28000
48 GB           44000
64 GB           59000
72 GB           67000
96 GB           90000

You can set this value in the server properties dialog under “Memory”.
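The same setting can also be applied with sp_configure instead of the dialog; a sketch for an 8 GB server, using the 6200 MB starting value from the table above:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Cap the buffer pool so the OS keeps 2-3 GB for itself
EXEC sp_configure 'max server memory (MB)', 6200;
RECONFIGURE;
```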


Boost SQL Server Priority

“Boost SQL Server priority” is a setting you should not select, because it raises the SQL Server process priority on your system to High. Raising the priority too high may drain resources from essential operating system and network functions, resulting in problems shutting down SQL Server or limiting other operating system tasks on the server.
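If you want to confirm the setting (or reset it) from a query window rather than the properties dialog, sp_configure exposes it as 'priority boost':

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Ensure priority boost stays at its default of 0 (off)
EXEC sp_configure 'priority boost', 0;
RECONFIGURE;
```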


Compress Backup

A very useful option is to turn on backup compression. This makes SQL Server use its native compression for every backup by default. Believe it or not, this option can even give you a performance boost: compressed backups are up to 3 times as fast as regular backups, so you get less I/O and therefore more performance. In addition you will save a lot of disk space.
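The same default can be set with sp_configure; individual BACKUP statements can still override it with WITH COMPRESSION or WITH NO_COMPRESSION:

```sql
-- Make compressed backups the server-wide default
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
```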


Default Path

Set the database default paths. This is self-explanatory: just enter the default paths for data and log files, and make sure you put data and log files on separate drives. This way all your files will be placed in the right locations right away!


Max Degree of Parallelism (MAXDOP) Optional

MAXDOP is set to 0 by default. This option decides whether the query optimizer may distribute an expensive, complex query across all processor cores. 0 means it will try to spread the query across all cores; if you set it to 1, parallel execution plans are suppressed and each operation uses only one scheduler (core or thread), i.e. it is executed serially. I have not found a reliable source that says do it this way or that way, so experiment and see whether you get any performance benefit. Data-warehouse workloads definitely benefit from leaving the option at 0, while OLTP applications often run better with 1. MAXDOP=1 is the recommended setting for SharePoint and SAP, for example.
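MAXDOP is also an sp_configure option, so it is easy to toggle while you experiment:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- 1 = serial plans only; 0 (the default) = use all available schedulers
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;
```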


5) Install Databases

Database Sizing

If you want to calculate the estimated size of your SCOM databases there is a cool Excel sheet to do that.


To install the SCOM databases on SQL Server 2008 R2, use DBCreateWizard.exe, which you can find in the \SupportTools folder of the SCOM source media. The tool itself is self-explanatory and you will find plenty of information using Bing. There is just one point I would like to call out: if you try to set up the OperationsManagerDW database with this tool and choose a large database size, e.g. 100 GB, it may throw an error; once the database reaches a certain size, the tool cannot create it. A workaround is to choose 10 GB for the database size and, once it is created successfully, resize the database using SQL Server Management Studio.


6) Database Properties

The next settings apply to the database properties.


General Database Settings

In general the following settings are recommended. Make sure you have the same settings for all your SCOM databases; if they differ, change them!


SCOM DB Settings (Default, Recommended)

Database             Recovery Model   Autogrowth   Auto Shrink   Auto Close
OperationsManager    Simple           false        false         false
OperationsManagerDW  Simple           by 10%       false         false
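These options can also be set per database with ALTER DATABASE instead of the properties dialog; a sketch for the OperationsManager database (adjust the name to your environment):

```sql
ALTER DATABASE OperationsManager SET RECOVERY SIMPLE;
ALTER DATABASE OperationsManager SET AUTO_SHRINK OFF;
ALTER DATABASE OperationsManager SET AUTO_CLOSE OFF;
```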

7) Optimizing Performance

What is tempdb?

As its name suggests, tempdb holds temporary objects generated during database operations. As you can imagine, this database is used heavily, but its contents are not meant to be kept forever: every time SQL Server starts, tempdb is re-created, so the system always starts with a clean copy of the database.

By default the data file is 8 MB and the log file 1 MB, with autogrowth set to 10%. This means tempdb will grow in very small chunks and performance will suffer.


Autogrowth tempdb

A better approach is to set the autogrowth for the data and log files to a fixed size. This way tempdb grows once in one big chunk instead of many times in small chunks. For example, determine your current tempdb data file size; if it is, say, 10 GB, set the data file autogrowth “In Megabytes” to 500 MB. The same applies to the log file.
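The same change can be made with ALTER DATABASE; the logical file names tempdev and templog are the installation defaults, so verify yours first with sys.master_files:

```sql
-- Grow tempdb in fixed 500 MB chunks instead of 10% increments
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILEGROWTH = 500MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILEGROWTH = 500MB);
```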


Split tempdb Data Files

It is recommended to split your tempdb data file into equal parts. What does that mean? If you have at least 4 cores in your SQL Server, split tempdb into 4 equally sized data files. So if your tempdb grew to 2 GB over a few days, create 4 x 500 MB data files. If you have 8 cores available, split the tempdb file into 8 equal pieces, which in our example would be 8 x 250 MB chunks. But how do you choose the appropriate size? Go here.

If you want to create more than 8 data files, you must first analyze the allocation contention, but that I will leave to the SQL gurus (source). You can also create too many data files, as discussed here.

As an example I divided a single 2GB data file into four 500MB chunks.
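A sketch of that split in T-SQL, matching the 4 x 500 MB example; the file names and the T:\ paths are assumptions, so adjust them to your own drive layout:

```sql
-- Resize the existing primary data file to 500 MB
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 500MB);
-- Add three more equally sized data files
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 500MB, FILEGROWTH = 500MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 500MB, FILEGROWTH = 500MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev4, FILENAME = 'T:\tempdb4.ndf', SIZE = 500MB, FILEGROWTH = 500MB);
```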


But what about the templog file? You don’t need to split it into multiple log files as we did with the data files. Why? Because the log is written sequentially, so you would not benefit from multiple files.

Where should you place the tempdb files? It is recommended to place tempdb on a separate RAID array or dedicated LUN if possible. If you can’t, try at least to put the database file (*.mdf) and the log file (*.ldf) on separate disks. It is also recommended to always use the fastest storage you have for these files.

Virtual Log Files (VLF)

This procedure applies to all databases and their transaction log files. What are VLFs? Internally a transaction log consists of multiple blocks, called virtual log files. Every time a transaction log grows, it does so by adding additional virtual log files; the number added varies from 4 to 16, depending on the size the log is grown by.

The problem is that a transaction log created with the default size and growth may grow repeatedly, resulting in hundreds of virtual log files. You can see how many you have with the (undocumented) DBCC LOGINFO command. The reason this is bad for performance is due to the nature of log writes. The transaction log is written sequentially, and if it is on a dedicated disk the disk head remains static between writes (no seek time). Multiple virtual log files may be scattered over the disk, resulting in a seek every time a write crosses from one file to the next. This increases write latency.

The solution is to back up the transaction log (or perform a checkpoint if the recovery model is simple), shrink the log file to a minimum size, and then immediately grow it to the biggest size you think you need. This should result in, at most, 60 VLFs (depending on the log file size). If you have the opportunity to defragment the disk before doing this, even better. (Source)
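The steps above can be sketched in T-SQL; the database name and the logical log file name MOM_Log are hypothetical, so look yours up in sys.database_files first:

```sql
USE OperationsManager;  -- hypothetical database name
-- In simple recovery a CHECKPOINT empties the active log;
-- otherwise take a log backup first
CHECKPOINT;
-- Shrink the log file down to its minimum size
DBCC SHRINKFILE (MOM_Log, 1);
-- Then grow it back in one step to the size you expect to need (here 8 GB)
ALTER DATABASE OperationsManager MODIFY FILE (NAME = MOM_Log, SIZE = 8192MB);
-- Verify the resulting VLF count (one row per VLF)
DBCC LOGINFO;
```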

Here I ran DBCC LOGINFO; the output was 8 rows, so the log has 8 VLFs.


Now try to shrink the log file: select the database, right-click, and choose “Tasks\Shrink\Files”


Choose the log file you want to shrink and click “OK”


After shrinking you should have fewer VLFs. Check by running DBCC LOGINFO again.

In the next post I will write about maintaining the performance.

Happy performing!

