SQL Server Execution Plan Tutorial

Here is a great article written by Grant Fritchey on the basics of analyzing execution plans in SQL Server. It is actually the first chapter of his book on the topic. Yes, a whole book on SQL Server Execution Plans! I haven't read the whole book, but it is likely to prove very interesting for DBAs.

Most of us are familiar with the graphical representation of execution plans, but Grant Fritchey shows us how to also get execution plans in XML format, and how they actually give more information than their graphical counterpart.

XML execution plans can be saved to files (.sqlplan) and viewed either in their XML form or in graphical form. Being able to save execution plans to a file is very convenient, as it makes it easy to share them with DBAs and peers to ask for their opinion on a slow-running query.
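As a quick illustration, here is how an XML execution plan can be requested from a query window (the query itself is just a hypothetical example; SET SHOWPLAN_XML makes SQL Server return the plan as XML instead of executing the statement):

```sql
-- Ask SQL Server to return the estimated execution plan as XML
-- instead of executing the query.
SET SHOWPLAN_XML ON;
GO

SELECT p.Name, p.ListPrice
FROM Production.Product AS p
WHERE p.ListPrice > 100;
GO

SET SHOWPLAN_XML OFF;
GO
```

The XML returned by the batch can then be saved with a .sqlplan extension and reopened later, graphically, in Management Studio.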

Note that on my machine I actually had to edit the .sqlplan file, as the first line of the file was a piece of text reading “Microsoft SQL Server 2005 XML Showplan”. This meant the XML file was not well-formed and so could not be loaded by SQL Server Management Studio – this might be fixed in a later Service Pack or hotfix. Once I removed that line of text, I could successfully open the .sqlplan file in SQL Server Management Studio and see the graphical representation of the execution plan. I could also open the file in a text or XML editor and see the execution plan in its textual form. I think the two approaches can prove complementary.

The article also shows us how to collect XML execution plans through SQL Server 2005's Profiler tool. Once execution plans are collected from the server, they can be analyzed by DBAs and developers. This can prove very useful when you want to profile and analyze activity in a production environment, where you cannot play with data as freely as in a development environment.

For my own sake, I have summarized the introductory material on the topic below. You may skip it, as it is in any case based on the article written by Grant Fritchey.

Execution Plan Introduction.

When you are tuning T-SQL code for performance on SQL Server, the most important information available to you is the Execution Plan. It tells you which JOIN operations and other algorithms are executed, as well as which indexes are used. This information is crucial when you need to optimize poorly performing queries.

Please keep in mind that this discussion is focused on DML T-SQL.

DML stands for Data Manipulation Language and is aimed at fetching or manipulating data. Basically, it is any SELECT, INSERT, UPDATE and DELETE statement.

DDL stands for Data Definition Language and is aimed at defining data structures. Basically, it is any CREATE, DROP and ALTER statement. DDL statements do not need any query optimization because there is only one way to execute them. For example, there is only one way to create a table or an index.

When executing a T-SQL query, the T-SQL code is translated into instructions understandable by the Database engine. The Database engine is made of multiple processes/sub-engines, but two are of particular interest regarding Execution Plans: the Relational Engine and the Storage Engine.

Note that in the context of this text, a Process does NOT mean a Windows Process but rather has the more generic meaning of a collection of instructions processing some data. It can be seen as a software module or component.

The Relational Engine.

The Relational Engine is responsible for three processes which are of interest in our study:

  • The Parser. The Parser receives the T-SQL query as input and outputs a parse tree or query tree. The parse tree represents the logical steps necessary to execute the query.
  • The Algebrizer. The Algebrizer receives the parse tree from the Parser process as input and resolves all datatypes, names, aliases and synonyms. The output is binary information called the query processor tree.
  • The Query Optimizer. The query optimizer is a piece of software that models the way in which the database relational engine works. Using the query processor tree together with the statistics it has about the data and applying the model, the Query Optimizer works out heuristically what it thinks will be the optimal way to execute the query – that is, it generates an optimized estimated execution plan.

The Storage Engine.

The Storage Engine will execute the estimated execution plan, unless it judges that the plan should be modified. That can be the case if:

  • The estimated execution plan exceeds the threshold for parallel execution.
  • The statistics used to generate the plan are out of date.
  • The estimated execution plan is invalid (for example, it creates a temporary table and so contains a DDL statement).

The final execution plan, called the actual execution plan, is what is actually executed by the Storage Engine. It might or might not be the same as the estimated execution plan.
Note that generally, there won't be any differences between the estimated and actual execution plans.
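To capture the actual (rather than estimated) plan in Management Studio, the query has to be executed; SET STATISTICS XML ON returns the plan that really ran as XML alongside the results (the query below is a hypothetical example):

```sql
-- Execute the query and return the actual execution plan as XML
-- together with the query results.
SET STATISTICS XML ON;
GO

SELECT COUNT(*) AS ProductCount
FROM Production.Product;
GO

SET STATISTICS XML OFF;
GO
```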

Execution Plan Cache.

As it is expensive for the Server to generate execution plans, SQL Server will keep and reuse plans wherever possible. As they are created, plans are stored in a section of memory called the Plan Cache.

Once the estimated execution plan is created, and before it gets passed to the storage engine, the optimizer compares this estimated plan to actual execution plans that already exist in the Plan Cache. This reuse avoids the overhead of creating new execution plans. It is obviously beneficial for large and complex queries, but also for simple queries that can potentially be called very often (hundreds or thousands of times).
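On SQL Server 2005, the contents of the Plan Cache can be inspected through dynamic management views; this sketch lists cached plans together with their reuse counts, the SQL text they belong to, and the cached XML plan itself:

```sql
-- List cached plans, most reused first.
SELECT cp.usecounts,
       cp.cacheobjtype,
       cp.objtype,
       st.text       AS sql_text,
       qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
ORDER BY cp.usecounts DESC;
```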

Execution Plans can be removed from the cache in the following scenarios:

  • Memory is required by the system.
  • The “age” of the plan (its time-to-live) has reached zero.
  • The plan isn’t currently being referenced by an existing connection.

Also, cached execution plans might not be reused if an execution plan needs to be recompiled. Certain events and actions can cause a plan to be recompiled. Grant Fritchey enumerates those events in his article.

New Features in Visual Studio 2008

In this post I will go through some Visual Studio 2008 tools and features I found interesting.

1. Unit Test Tool

In VS 2008, Unit Testing is facilitated through a unit test class code generator.

Unit Testing is the act of writing a piece of code whose only purpose is to test another piece of code that is part of the end product. It is a particularly tedious task, so having a unit test code generator is very handy.

A Unit Test class is used to test a class that is part of the software being built. If every class of a piece of software has a matching class used to unit test it, all the code will be unit tested. Unit testing is the lowest level of Quality Assurance: it does not test the software as a whole or its external functionality, but rather makes sure that every testable piece of code in the software behaves as it should.

You should consider doing Unit Testing as:

  • It creates test units to check if the code is producing expected results.
  • It improves code coverage. Code coverage is a number telling what percentage of the code has actually been tested. The higher the value, the more confident we can be in the quality of the code (as it means that a large part of the code returns expected values).

To generate a Unit Test class in VS 2008, simply right-click on any class definition and select the Create Unit Tests option to call the unit test generator tool.

Visual Studio 2008 - Create Unit Test Class
This will create a Unit Test class in a separate project dedicated to Unit Testing; this way your Unit Test code and the actual code live in different projects and are neatly kept apart.

Note that if you try to generate a Unit Test class for a class that has private or internal modifiers, VS2008 will add a special InternalsVisibleTo attribute to your original project so that your Unit Test project (and only that one) has access to the internal classes and members of the original project containing the classes you want to unit test. This means that the attribute is not added at the class level but at the project level, in the AssemblyInfo.cs file.

Moreover, as can be seen below, only the Unit Test project (here called CalculatorTest) will have access to internal classes and methods:

[assembly: System.Runtime.CompilerServices.InternalsVisibleTo("CalculatorTest")]

Once the Unit Test classes are generated, you will have to go into them, inspect the code, set the input values that will be used to call the methods under test, and set the expected value that each method should return. This way, the actual returned value can be compared with the expected value to decide whether the test passed. There are TODO sections in the generated code so that you can easily locate where to set the test values.

In the example below, values are set for the length and width of the rectangle, and the variable expected contains the pre-calculated area of the rectangle, so that we can check whether the output of the method is equal to what should be returned. If the value returned by the method is not what is expected, the unit test fails, which means there is a bug in the tested method. Visual Studio clearly shows in the test results which unit tests succeeded or failed.

Lastly, do not forget to comment out the Assert.Inconclusive() method call.
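Filled in, the generated test method for the rectangle example might look like the sketch below (the Rectangle class, its Length/Width properties and its Area() method are hypothetical stand-ins for whatever class is under test):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class RectangleTest
{
    [TestMethod]
    public void AreaTest()
    {
        // TODO sections filled in by hand: the input values...
        Rectangle target = new Rectangle { Length = 4.0, Width = 5.0 };

        // ...and the pre-calculated expected result.
        double expected = 20.0;

        double actual = target.Area();

        // The test fails if the actual value differs from the expected one.
        Assert.AreEqual(expected, actual);

        // Comment this out once real values are set, or the test
        // will always be reported as inconclusive:
        // Assert.Inconclusive("Verify the correctness of this test method.");
    }
}
```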

Visual Studio 2008 - Initialize parameter values for Unit Test
The Test project creates a vsmdi file in the solution items folder, named after the solution: Calculator.vsmdi for a Visual Studio solution called Calculator.

To actually run some or all of the unit test methods, open the vsmdi file, check the methods that you want to test, right-click on the list of methods and select Run Checked Tests.

Visual Studio 2008 - Choose Method To Unit Test
Once the tests have run, the results are displayed, showing whether each test succeeded, failed or was inconclusive.

Visual Studio 2008 - Unit Test Result Window

2. Object Test Bench

If you do not have the time to create full-blown Unit Tests but still want to test some of your classes and/or methods, there is a quick-and-dirty way to do it using the Object Test Bench tool. The tool can call static methods on classes and create object instances so that instance methods can be called ad hoc. Methods can thus be tested very simply, much like when you choose to execute a Stored Procedure in SQL Server Management Studio and a window pops up to let you input the SP parameters.

To use the Object Test Bench, you first need to create a Class Diagram of your project. To do so, go to the Class View of your Visual Studio project, select the root namespace of your project, right-click on it and choose the View Class Diagram option.

Visual Studio 2008 - Generate Class Diagram From Source Code
Once the class diagram is created, you can call the Object Test Bench tool either by calling a static method on a class or by calling one of the class constructors, so that you can then call instance methods on the instantiated object.

In my case I want to test the Add() method to check whether my Calculator object correctly adds numbers. To do so, I first instantiate a Calculator object, which opens the Object Test Bench window under the class diagram. After the object is created in memory and appears in the Object Test Bench window, I can choose to call any method on the object. I will thus dynamically call the Add() method through the Visual Studio IDE and check whether the method returns the correct result.

Visual Studio 2008 - Start Object Test Bench Tool - Create Object Instance
Visual Studio 2008 - Start Object Test Bench Window
A window will pop up so that you can give values to the input parameters.

Visual Studio 2008 - Object Test Bench Invoke Method And Set Input Parameters
Once the parameters are entered and you click OK, a pop-up window displays the result. You can choose to save the result in a variable so that you can reuse it later. For example, you could reuse the variable as a parameter to call another method that you want to test-bench. This way, in a few clicks, you can create a bunch of objects and then reuse them later to call methods that take complex-type objects as parameters.

Visual Studio 2008 - Object Test Bench Invoke Method Result Window

3. Generate Method Stub (only for C#)

This is a code generation feature that creates a method before it exists, based on a call to that method. Once the call is written, Visual Studio 2008 IntelliSense will give you the option to generate a method stub matching that call, with matching input parameters and return type.

Visual Studio 2008 - Generate Method Stub
The generated method will be created in the matching class and its stub implementation will simply throw a NotImplementedException.
Visual Studio 2008 - Generate Method Stub Result
Note that this code generation feature is only available for C#.
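As a hypothetical illustration (the Divide method and the calculator variable are made up for the example), calling a method that does not exist yet and choosing the generate-stub option produces something like:

```csharp
// In the calling code, Divide() does not exist yet:
double result = calculator.Divide(10, 2);

// Generating the stub adds a matching method to the Calculator class,
// whose body simply throws until you implement it:
internal double Divide(int p, int p_2)
{
    throw new NotImplementedException();
}
```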

4. Refactoring Tools (C# only)

Refactoring is making changes to a body of code in order to improve its internal structure without changing its external behavior.

– Martin Fowler

It is a useful concept to make code cleaner and more understandable/readable.

A typical refactoring case is breaking up a lengthy method into separate methods. To do so, you can highlight a piece of code, right-click on it (or go to the Refactor menu of Visual Studio) and choose Extract Method. This generates a method containing the highlighted code and replaces the highlighted code with a call to the generated method.

Refactor menu in Visual Studio 2008:

Visual Studio 2008 - Refactor Menu
Refactoring by extracting a method shortens the original code: the highlighted piece of code is moved into its own method.

Visual Studio 2008 - Refactoring A Method
The example above will create a method containing the highlighted code and replace the original code with a call to the generated method (which I called WriteLogToConsole):
Visual Studio 2008 - Refactoring, Generated Method
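In code, the Extract Method refactoring looks roughly like this (the logging statements and the Run method are hypothetical; only the WriteLogToConsole name comes from my example):

```csharp
// Before: the logging lines are inlined in the calling method.
public void Run()
{
    int result = Add(2, 3);
    Console.WriteLine("Result: " + result);
    Console.WriteLine("Logged at: " + DateTime.Now);
}

// After Extract Method on the two highlighted lines: the original code
// is replaced by a call, and Visual Studio generates the new method.
public void Run()
{
    int result = Add(2, 3);
    WriteLogToConsole(result);
}

private void WriteLogToConsole(int result)
{
    Console.WriteLine("Result: " + result);
    Console.WriteLine("Logged at: " + DateTime.Now);
}
```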

5. The .Net Framework Source Code

It is possible to step through the source code of the .Net Framework while debugging. Here are the steps I took to make it work:

1. Install the hotfix KB 944899 – Visual Studio 2008 performance decreases when you step through source code that you downloaded from Reference Source Server.

2. Configure the Visual Studio debugger to be able to step into the .Net Framework Source Code:

  • Uncheck Enable Just My Code (Managed only).
  • Check Enable source server support.

Visual Studio 2008 - Configuring Visual Studio For Framework Source Code Debugging
3. Configure the Symbols section of the Visual Studio debugger options so that Visual Studio knows where to download the .Net Framework debugging symbols (.pdb files) and source code.

  • Set the Symbol file (.pdb) location to be: http://referencesource.microsoft.com/symbols
  • Set a Cache location folder where the .Net Framework pdb and source code files will be stored. Make sure it is a location that your account has read/write access to. For example, a folder under your user hive.
  • Clear the Search the above locations only when symbols are loaded manually checkbox if you want Visual Studio to automatically download symbols and source code as you step into .Net Framework code (F11 shortcut key). Note that if your project is big and references many libraries, downloading all the debugging symbols will make the first debugging session slow. If you prefer to load symbols only when needed, keep that box checked. You will then have to download debugging symbols and source code on demand by right-clicking the appropriate dll in the stack trace and choosing Load Symbols.

Here is how I configured my Visual Studio:

Visual Studio 2008 - Configuring Visual Studio Framework Source Code Symbol Location
Visual Studio 2008 is now all set to debug and step into the .Net Framework Source Code!

While debugging, we can now see that the debugger call stack contains detailed file and line number information for the .NET Framework classes and methods:

Visual Studio 2008 - Debugger Call Stack
Example of use:

In the following screenshot, I stepped into a line of code that calls the ToString() method on a Double. This causes the mscorlib pdb file to be downloaded, along with the source code for the Double structure, so that I can actually debug into the Double type and see its implementation as written by the .Net team. That is something I find really cool, and that I think has been missing for a long time!

Visual Studio 2008 - Step In .Net Framework Source Code

Modules Window:

While you are debugging, you can bring up the Modules Window by hitting ALT+CTRL+U. This window shows all the dlls loaded by the debugger and lets you see which dlls have debug information loaded and which do not. You can manually load debugging symbols from that window by right-clicking the library you want to load symbols for and selecting the Load Symbols option.

Visual Studio 2008 - Debugger Modules Window

List of assemblies currently available at the time of writing for symbol/source loading:

  • Mscorlib.dll
  • System.dll
  • System.Data.dll
  • System.Drawing.dll
  • System.Web.dll
  • System.Web.Extensions.dll
  • System.Windows.Forms.dll
  • System.XML.dll
  • WPF (UIAutomation*.dll, System.Windows.dll, System.Printing.dll, System.Speech.dll, WindowsBase.dll, WindowsFormsIntegration.dll, Presentation*.dll, some others)
  • Microsoft.VisualBasic.dll

For reference, here is a lengthier blog post by Shawn Burke with more information regarding .Net Framework Source Code debugging.

6. SQL Metal

SQL Metal is a command-line utility (sqlmetal.exe) used to help implement LINQ to SQL scenarios.

SQL Metal can:

  • Generate source code and mapping attributes or a mapping file from a database.
  • Generate an intermediate database markup language file (.dbml) for customization from a database.
  • Generate code and mapping attributes or a mapping file from a .dbml file.
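For example (the server and database names here are hypothetical; /server, /database, /namespace, /code, /dbml and /map are standard SqlMetal switches):

```bat
REM Generate C# entity classes with attribute-based mapping directly from a database
sqlmetal /server:localhost /database:Northwind /namespace:Northwind.Data /code:Northwind.cs

REM Or generate an intermediate .dbml file first, customize it, then generate
REM code together with an external XML mapping file from it
sqlmetal /server:localhost /database:Northwind /dbml:Northwind.dbml
sqlmetal /code:Northwind.cs /map:Northwind.map Northwind.dbml
```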

What is a mapping file?
A mapping file is an XML file that specifies the mapping between the data model of the database and the object model of the .Net code. It keeps the mapping code out of the application code, which helps keep the code cleaner and leaner. Moreover, since it is XML (like any other .config file), it can be changed without rebuilding the application code.
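A minimal sketch of such an external mapping file, assuming a hypothetical Customers table (the 2007 mapping namespace is the one used by LINQ to SQL external mapping files):

```xml
<?xml version="1.0" encoding="utf-8"?>
<Database Name="Northwind"
          xmlns="http://schemas.microsoft.com/linqtosql/mapping/2007">
  <Table Name="dbo.Customers" Member="Customers">
    <Type Name="Customer">
      <Column Name="CustomerID" Member="CustomerId" IsPrimaryKey="true" />
      <Column Name="CompanyName" Member="CompanyName" />
    </Type>
  </Table>
</Database>
```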

Check out MSDN documentation for more information about SQLMetal.

7. Visual Studio 2008 Product Comparison

Great comparison between the different functionalities available on each edition of Visual Studio 2008: http://www.microsoft.com/en-us/download/details.aspx?id=7940