November 10, 2009 Posted in programming  |  sql

Speed up your queries in SQL Server by being declarative

It’s interesting how a lot of the work I’ve been doing lately has in some way involved a kind of performance tuning. Previously I’ve talked about how to increase the performance of .NET applications by using delegates instead of reflection in code that runs frequently.

This time it is all about performance in database processing.

The scenario

Imagine an application that manages a wait list. Users of this application put themselves in line and wait for their turn to gain access to some kind of shared resource. Here are the basic rules of this system:

  • The same user can appear in the wait list multiple times, once for every resource she is queuing for.
  • The users’ position in the wait list at any given time is decided by a score.
  • This score is calculated based on the number of credits each user has in the system compared to the amount required by the resource they wish to access.

Let’s say that this wait list is modeled in a Microsoft SQL Server database with the following schema:


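The original schema diagram does not survive in this copy. As a rough sketch only, based on the columns the article mentions (UserID, ResourceID, Score, and the Credit columns), the tables involved might look something like this; all other names and constraints are assumptions:

```sql
CREATE TABLE [User] (
    UserID  INT PRIMARY KEY,
    Credit  INT NOT NULL          -- credits the user currently has
)

CREATE TABLE [Resource] (
    ResourceID INT PRIMARY KEY,
    Credit     INT NOT NULL       -- credits required to access the resource
)

CREATE TABLE WaitList (
    UserID     INT NOT NULL REFERENCES [User](UserID),
    ResourceID INT NOT NULL REFERENCES [Resource](ResourceID),
    Score      INT NULL,          -- NULL until calculated
    PRIMARY KEY (UserID, ResourceID)
)
```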
The position of the different users in the wait list is periodically updated by a Stored Procedure that calculates the current score for each and every row in the WaitList table.

So far so good. Now, imagine that this WaitList table contains somewhere around 30 million rows, and the Stored Procedure that updates all of the scores takes about 9 hours to complete. Now we have a problem.

The imperative SQL approach

Before going into all kinds of general database optimization techniques, let’s start off by looking at how that Stored Procedure is implemented.

Here is a slightly simplified version of it:

CREATE PROCEDURE CalculateWaitListScores_Imperative
AS
BEGIN
    DECLARE @rowsToCalculate INT
    DECLARE @userID INT
    DECLARE @resourceID INT
    DECLARE @score INT

    SELECT @rowsToCalculate = COUNT(*)
    FROM WaitList
    WHERE Score IS NULL

    WHILE ( @rowsToCalculate > 0 )
    BEGIN
        SELECT TOP 1 @userID = UserID, @resourceID = ResourceID
        FROM WaitList
        WHERE Score IS NULL

        -- The actual calculation of the score is omitted for clarity.
        -- Let's just say that it involves a SELECT query that joins
        -- the [WaitList] table with the [User] and [Resource] tables
        -- and applies a formula that associates the values
        -- of the [Credit] columns in each of them.
        -- For the sake of this example we just set it to a constant value
        SET @score = 150

        UPDATE WaitList
        SET Score = @score
        WHERE UserID = @userID
          AND ResourceID = @resourceID

        SELECT @rowsToCalculate = COUNT(*)
        FROM WaitList
        WHERE Score IS NULL
    END
END


If you aren’t into the Transact-SQL language syntax, let me spell out the algorithm for you:

  1. Get the number of rows in the WaitList table where the score has never been calculated
  2. If there are any such rows, get the user and the resource IDs for the first row in the WaitList table where the score has never been calculated
  3. Calculate the score for that user and resource
  4. Update the score with the newly calculated value
  5. Go to Step 1

In the worst case, this set of operations will be repeated 30 million times, that is, once for every row in the WaitList table. Think about it for a moment.

While looking at this code, I immediately imagined this dialogue taking place between SQL Server and the developer(s) who wrote the Stored Procedure:

Developer: Listen up, SQL Server. I want you to calculate a new score and update all of those 30 million rows, but do it one row at a time.

SQL Server: That’s easy enough, but I’m pretty sure I can find a faster way to do this, if you’ll let me.

Developer: No, no. I want you to do exactly what I said. That way it’s easier for me to understand what’s going on and debug if any problem occurs.

SQL Server: Alright, you’re the boss.

Jokes aside, the bottom line here is this:

By implementing database operations in an imperative manner, you effectively tie up the hands of the query execution engine, thus preventing it from performing a number of optimizations at runtime in order to speed things up.

And that basically means trading performance and scalability for more fine-grained control.

The declarative SQL approach

Let’s see if we can make this Stored Procedure run any faster, by changing our approach to the problem altogether.

This time, we’ll tell the database what we want done in a declarative manner, and we’ll let the query execution engine figure out the best way to get the job done.

Here is a rewritten version of the original Stored Procedure:

CREATE PROCEDURE CalculateWaitListScores_Declarative
AS
UPDATE WaitList
SET Score = dbo.CalculateScore(UserID, ResourceID)


What we did is basically remove the explicit loop and merge all operations into a single UPDATE statement on the WaitList table, which invokes a custom scalar function named CalculateScore to calculate the score using the values of the current row.
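The CalculateScore function itself is never shown in the article. A minimal skeleton, which stands in for the real credit-based formula by returning the same constant used in the imperative example, could look like this:

```sql
CREATE FUNCTION dbo.CalculateScore
(
    @userID INT,
    @resourceID INT
)
RETURNS INT
AS
BEGIN
    -- The real implementation would join the [User] and [Resource]
    -- tables on the supplied IDs and combine their [Credit] columns.
    -- As in the imperative example, we just return a constant here.
    RETURN 150
END
```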

Now, let’s look at a performance comparison:


That’s a pretty significant leap in speed. How is that possible? A look at the CPU usage on the database server while running the two versions of the Stored Procedure pretty much explains it all:

CPU usage while executing CalculateWaitListScores_Imperative:


CPU usage while executing CalculateWaitListScores_Declarative:


As you see, in the first picture the CPU holds steady at 9-10% and is basically using only one of the four available cores. This is because SQL Server is forced to do its work sequentially and has to wait until the score for the current row has been calculated and updated before proceeding to the next.

In the second picture, we are simply telling SQL Server our intent, rather than dictating exactly how it should be done. This allows SQL Server to parallelize the workload, which can now be executed on multiple CPU cores at once, leveraging the full power of the hardware.

Lessons learned

Here are a few takeaways from this exercise:

  1. SQL is a declarative language at its core, designed to work with sets of rows. That’s what it does best and that’s how you should use it.
  2. Whenever possible, try to avoid applying an imperative programming mindset when implementing database operations, even if the constructs available in SQL-derived languages like T-SQL make it easy to do so.
  3. Don’t be afraid to give up some control over what happens at runtime when your database code runs. Let the database find out the best way to do things, and get ready to experience some great performance improvements.

Hope this helps.
