Sep 29 2010
 

Most T-SQL guides and tutorials I’ve seen concentrate on the following transaction commands: BEGIN, COMMIT, and ROLLBACK, which I’ll henceforth refer to as the “big three”. While they’re certainly very important, there’s also a fourth command, SAVE, which is typically treated like a distant cousin who’s only seen at family reunions every few years. It’s a shame because while SAVE is not always necessary, it can be a huge help when you need it.

If you’ve ever found yourself in a situation where you have a multi-step process that takes a long time to complete, you’re probably using the big three transaction commands to ensure the process is atomic – that everything succeeds or everything fails. Now let’s say that somewhere in the middle of your process one of those steps fails. Your error handling code detects the failure and the transaction is rolled back as if nothing ever happened. That’s an acceptable outcome, but all the time that went into the steps that did succeed was for naught. What would really be awesome is a way to roll back only part of a transaction, so the work that succeeded isn’t lost and your process can attempt to continue. This is exactly where savepoints come in.

What They Do

When you set a savepoint using the SAVE TRANSACTION command, a marker is created within the transaction. Should something go sour later on and you don’t want to roll back the entire transaction, you can roll back just to the savepoint. Everything that occurred before the savepoint will still be there, whereas everything that happened after it will be undone. Once you roll back to a savepoint, you’ll have the same transactional options you had before: committing the transaction as-is, rolling back the entire transaction, or performing more operations within the transaction and committing (or rolling back) when done.

I should emphasize again that the transaction remains open when rolling back to a savepoint. Savepoints allow you to get back to a point within a transaction so that you can retry a failed step (perhaps due to a deadlock or network error), but you still need to complete the transaction by either committing it or rolling it all back. If you roll back to a savepoint and then forget about it, sooner or later you’ll be trying to figure out why your transaction log has grown so much (and it will be due to an open transaction!).
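What does “retry a failed step” actually look like? Here’s a minimal sketch, assuming SQL 2005+ for TRY/CATCH; dbo.SomeRiskyStep is a made-up placeholder for the step that occasionally fails. One caveat: some errors, like being chosen as a deadlock victim, doom the entire transaction (XACT_STATE() returns -1), and no savepoint can rescue you from those.

-- a rough sketch of savepoint-based retry, not production code
DECLARE @Tries INT;
SET @Tries = 0;

BEGIN TRAN;
-- ... earlier steps whose work we want to keep ...

WHILE @Tries < 3
BEGIN
    SAVE TRAN BeforeRiskyStep;
    BEGIN TRY
        EXEC dbo.SomeRiskyStep;  -- hypothetical step that sometimes fails
        BREAK;                   -- it worked; stop retrying
    END TRY
    BEGIN CATCH
        IF XACT_STATE() = -1
        BEGIN
            ROLLBACK TRAN;  -- transaction is doomed; savepoints can't help
            RETURN;
        END
        ROLLBACK TRAN BeforeRiskyStep;  -- undo just the failed step
        SET @Tries = @Tries + 1;
    END CATCH
END

-- succeeded or out of retries, the earlier work is still intact;
-- commit it (or roll it all back), just don't leave the transaction open
IF @@TRANCOUNT > 0
    COMMIT TRAN;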

At this point you may be wondering why you can’t just create a transaction within a transaction and accomplish the same thing by rolling back the inner transaction. You can try, but it won’t work. While it is syntactically possible to create nested transactions, committing a nested transaction does nothing, and rolling one back will not produce the desired effect. I would prove this with a demo, however Paul Randal (blog | twitter) has already done an excellent job of that and I see no sense in duplicating his effort. Check out his blog post if you want to learn more.

How To Use Them

The instructions from Books Online show that the SAVE TRANSACTION command is rather simple. The only argument (which is required) is a name for the savepoint of up to 32 characters in length. You can create multiple savepoints per transaction and can roll back to any of them provided they have different names. If you create a savepoint with a name that’s already in use, the older one will be overwritten and only the newer one will be accessible.
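For instance, here’s a quick sketch of that name-reuse behavior (t is a hypothetical one-column table):

BEGIN TRAN;
INSERT INTO t VALUES (1);
SAVE TRAN Spot;      -- first savepoint named Spot
INSERT INTO t VALUES (2);
SAVE TRAN Spot;      -- same name again; the older marker is no longer reachable
INSERT INTO t VALUES (3);
ROLLBACK TRAN Spot;  -- rolls back to the newer Spot: row 3 is undone,
                     --  rows 1 and 2 survive
ROLLBACK TRAN;       -- clean up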

To roll back to a savepoint, specify the savepoint name in the ROLLBACK TRANSACTION statement. Omitting the savepoint name parameter makes it a plain old rollback command, and the entire transaction will be rolled back just as one would expect. Let’s try it out with some sample code:

-- ** Don't run this all at once!! **
-- You'll want to run these statements one at a time, checking the
--  contents of table 'tbl' in-between each statement!

-- Do this in AdventureWorks (or any other dev db) for safety
USE AdventureWorks2008;

-- create table and add some data
CREATE TABLE tbl (
  id INT PRIMARY KEY CLUSTERED,
  val VARCHAR(20)
);
INSERT INTO tbl VALUES (1,'foo');
INSERT INTO tbl VALUES (2,'foofoo');
INSERT INTO tbl VALUES (3,'foobar');
INSERT INTO tbl VALUES (4,'bar');

-- begin our transaction and add another row
BEGIN TRAN;
INSERT INTO tbl VALUES (5,'fubar');
-- if you select * from tbl, you should now see 5 rows

-- set a savepoint named "Savept"
SAVE TRAN Savept;

-- delete a record (oops!)
DELETE FROM tbl WHERE id = 3;
-- looking at tbl, you'll now see rows 1,2,4,5

-- rollback to the savepoint
ROLLBACK TRAN Savept;
-- tbl will now have 5 rows again, just like it did at the savepoint

-- roll it all back
ROLLBACK TRAN;
-- tbl will have 4 rows now just like before the transaction

DROP TABLE tbl;
-- clean up our mess :)

Convinced it’s nifty? Awesome! :) One final point I’d like to touch on is locking. Books Online for the SAVE TRANSACTION command emphasizes that locks acquired during a transaction are held until the transaction completes. So what happens when you roll back to a savepoint? Any locks acquired after the savepoint will be released, just like all locks acquired by a transaction are released when the entire transaction is rolled back.

To see this for yourself, run through the above sample code again and open another query window to check the locks held by that process using sp_lock or the sys.dm_tran_locks DMV. You’ll notice that the locks associated with deleting row 3 disappear when rolling back to the savepoint. A notable exception to this behavior is when a lock is escalated or converted during the transaction. In that case, the lock will not revert to its previous state, and it will not be released until the transaction commits or is rolled back.
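If you go the DMV route, a query along these lines in the second window will do the trick (find the first window’s session ID with SELECT @@SPID; the 53 below is just an example):

-- run from a second window; 53 stands in for the first window's session ID
SELECT resource_type,
       resource_description,
       request_mode,
       request_status
FROM sys.dm_tran_locks
WHERE request_session_id = 53;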

So there you have savepoints. They aren’t always necessary, but if you’re really in a pinch they sure can come in handy!

Sep 21 2010
 

I had the immense pleasure of speaking at SQL Saturday #50 in Iowa City this past weekend. To say it was an awesome time would be quite the understatement. Special thanks to Michelle Ufford (blog | twitter), Ed Leighton-Dick (twitter) and Jeff Belina (twitter) for all you did to make this event possible. Also a giant thank you to all of the sponsors who made this event financially possible.

To start at the very beginning, I made it to Iowa City in the early afternoon and had extra time to do a little exploring. Caving to my fetish for historic buildings and architecture, I checked out the Old Capitol building and a bit of the University of Iowa campus surrounding it. My inner engineer was further satisfied at the speaker’s dinner later that evening, which was held at the Iowa River Power Company restaurant, a former hydroelectric power plant. In addition to an excellent view of the river and dam, I enjoyed some wonderful food and conversation! It was a great chance to catch up with familiar faces such as Wendy Pastrick (blog | twitter), Ted Krueger (blog | twitter), Jes Borland (blog | twitter) and Alex Kuznetsov (blog). I also got to meet Jason Strate (blog | twitter), Kathi Kellenberger (blog | twitter), Chris Leonard (twitter), Arie Jones (blog | twitter), Trenton Ivey (blog | twitter) and Louis Davidson (blog | twitter) in person for the first time.

Saturday, of course, was the main event. In addition to presenting, I was able to enjoy 4 other great sessions, including:

  • Arie Jones’ Take Control with Resource Governor and PBM. His presentation was most informative and very well put-together. I knew a few things about Resource Governor and Policy-Based Management going in, and knew a lot more about them 90 minutes later. I hope to be able to take advantage of both these features sometime in the future.
  • The Women in Technology panel, which took place during lunch. A standing-room-only crowd gathered to listen to experiences and insight from Jes Borland, Kathi Kellenberger and Wendy Pastrick, moderated by Michelle Ufford. Food-wise, lunch included a smorgasbord of pizza to choose from, including some topped with #bacon (photo caption: “Bacon Pitas!”) – unless they’re weird, your DBA will eat it!
  • Monitoring Data Changes The Microsoft Way With Change Data Capture, again by Arie Jones. Much like earlier in the day, AJ was on top of everything and included some awesome DBA ninja tips as well as some great demos.
  • Stay Agile, Stay Sane by Kendra Little (blog | twitter). Kendra discussed her experiences being a DBA working with agile development teams, and some of the techniques her group had adopted as a result. It was extremely enlightening, and it appeared that Kendra’s team had successfully addressed a lot of the issues commonly encountered by DBAs. Extra kudos for her hand-drawn slide illustrations – way cooler than the images I got from Flickr!

As for my own presentation, I delivered “Application Coding Sins” to a pretty full room. Despite having a time slot immediately following lunch, my presentation must have been good enough to keep everyone awake because I didn’t notice any nappers. I’m sure my tactic of throwing candy and/or swag at those who asked questions didn’t hurt either! This was not my first time presenting database topics in front of a group, but it was my first time speaking at a SQL Saturday event. All in all I’m very pleased with how it went and got some great feedback from those who attended. If you’re looking for my slides and code samples, you can download them below.

Slides and Code Samples

And that’s all I’ve got for SQL Saturday #50. It was a great weekend, and the only real downside is that there are so many great speakers talking about awesome stuff that there’s no way to attend all of the sessions. If you’ve never been to a SQL Saturday before, you should really consider it. Check SQLSaturday.com for an upcoming event near you. I know I’ll submit sessions for future SQL Saturdays, and I really hope to see you there!

Sep 14 2010
 

This post is my contribution to T-SQL Tuesday #10, which is being hosted by Michael J. Swart (blog | twitter)

Each year my Boy Scout troop has a rummage sale as a fundraiser.  All the scouts and their families gather up their old clothes, books, toys, electronics, and any other treasures they no longer need and donate them to the troop, which sells them.  It’s basically a giant garage sale, except it’s done in the gym of the school we meet at.  Since everything sold comes from someone associated with the troop, many years ago it was decided to create an award for the person who donates the weirdest piece of crap (er, most unique item) each year.  This award comes in the form of a travelling trophy which started out as a child’s potty chair that was painted gold.  Each year, the winner is instructed to embellish the trophy with something new.  Here’s how it looks now!

As you can see, it’s grown from a simple potty chair mounted on a board into a lamp with a whole bunch of accessories, including a barbecue grill. I’m surprised nobody’s added a TV or fridge yet – then there’d really be no reason to leave!

So what does this have to do with indexes?  The bowels of this post contain something that I have embellished myself.  Last month Tom LaRock (blog | twitter) posted a script for helping to detect duplicate indexes.  He couldn’t claim credit for it, as it was passed on to him a while ago and he’s unsure of the original source.  Similarly, I won’t claim credit for this other than to say I made a few changes I thought would be handy.  If you feel this script is lacking functionality that would be helpful, I invite you to change it and please share it with the community when you’re done!

Why Duplicate Indexes Are Bad

Aside from serving no purpose, duplicate indexes can cause problems that may require your attention, such as:

  • Wasted disk space – just like any other type of data, storing something more than once means it will take up space more than once as well.
  • Slower backups, restores, and integrity checks – more data means more time to read/write/check it all.
  • Slower data modification operations – inserting, updating, and deleting rows will take longer when there are more indexes that need updating.
  • Increased chances of deadlock and blocking – more objects requiring locks for an operation means a greater chance that another process has already acquired at least some of said locks.
  • Increased transaction log growth and/or usage – the more writing and locking happening across an increased number of objects, the more will be written to the log.

And I’m sure there are plenty more potential issues than those I just listed. The simplest way to avoid them is to maintain only as many indexes as necessary, and this script can help you find out if any duplicates exist.
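If you’d like to see the script flag something, a contrived example like the following (a throwaway table purely for illustration) will show up in both the redundant and reverse reports:

-- two nonclustered indexes with identical keys (redundant)...
CREATE TABLE dbo.Orders (
    OrderID INT PRIMARY KEY CLUSTERED,
    CustomerID INT,
    OrderDate DATETIME
);
CREATE INDEX IX_Orders_Customer ON dbo.Orders (CustomerID, OrderDate);
CREATE INDEX IX_Orders_Customer2 ON dbo.Orders (CustomerID, OrderDate);
-- ...and a pair with the key columns swapped (reverse)
CREATE INDEX IX_Orders_DateCust ON dbo.Orders (OrderDate, CustomerID);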

What’s New In This Version

  • I removed the columns that returned individual component columns of the indexes.  Since there was already a column that showed them all concatenated together, I couldn’t think of any reason for listing them individually other than it might help when programmatically dropping indexes, which shouldn’t be done anyway.  Indexes should only be dropped after careful review.
  • Added a column for displaying the size of the index.  Duplicate indexes are bad no matter what their size, but this information may be helpful.
  • Added columns showing whether the index is filtered or has included columns.

Disclaimer

I have tested this script on SQL Server 2005 and later; it will not run on earlier versions.  You should not trust any script you find on the internet (including this one!), so make sure you completely understand what a script is doing before running it.  Always test on a testing or development server before using it in any production environment.

The Script

What you’ve really been waiting for.  You can see the entire script below, or just download the .zip of it here.  Enjoy!

-- This script will generate 3 reports that give an overall or high level
-- view of the indexes in a particular database. The sections are as follows:
-- 1.  Lists ALL indexes and constraints along with the key details of each
-- 2.  Lists any tables with potential Redundant indexes
-- 3.  Lists any tables with potential Reverse indexes

--  Create a temporary table to hold the core index info
CREATE TABLE #AllIndexes (
   TableID INT NOT NULL,
   SchemaName SYSNAME NOT NULL,
   TableName SYSNAME NOT NULL,
   IndexID INT NULL,
   IndexName NVARCHAR(128) NULL,
   IndexType VARCHAR(12) NOT NULL,
   ConstraintType VARCHAR(11) NOT NULL,
   ObjectType VARCHAR(10) NOT NULL,
   AllColName NVARCHAR(2078) NULL,
   ColName1 NVARCHAR(128) NULL,
   ColName2 NVARCHAR(128) NULL,
   IndexSizeKB BIGINT NULL,
   HasFilter BIT NOT NULL,
   HasIncludedColumn BIT NOT NULL
);

DECLARE @ProductVersion NVARCHAR(128);
DECLARE @MajorVersion TINYINT;
DECLARE @loadIndexSQL NVARCHAR(4000);

SET @ProductVersion = CONVERT(NVARCHAR(128), SERVERPROPERTY('ProductVersion'));
SET @MajorVersion = CONVERT(TINYINT, LEFT(@ProductVersion, CHARINDEX('.', @ProductVersion) - 1));

SET @loadIndexSQL = N'
	INSERT INTO #AllIndexes (TableID, SchemaName, TableName, IndexID, IndexName, IndexType, ConstraintType,
		ObjectType, AllColName, ColName1, ColName2, IndexSizeKB, HasFilter, HasIncludedColumn)
	SELECT o.object_id, -- TableID
		u.[name], -- SchemaName
		o.[name], -- TableName
		i.index_id, -- IndexID
		i.[name], -- IndexName
		CASE i.[type]
			WHEN 0 THEN ''HEAP''
			WHEN 1 THEN ''CL''
			WHEN 2 THEN ''NC''
			WHEN 3 THEN ''XML''
			ELSE ''UNKNOWN''
		END, -- IndexType
		CASE
			WHEN (i.is_primary_key) = 1 THEN ''PK''
			WHEN (i.is_unique) = 1 THEN ''UNQ''
			ELSE ''''
		END, -- ConstraintType
		CASE
			WHEN (i.is_unique_constraint) = 1 OR i.is_primary_key = 1 THEN ''CONSTRAINT''
			WHEN i.type = 0 THEN ''HEAP''
			WHEN i.type = 3 THEN ''XML INDEX''
			ELSE ''INDEX''
		END, -- ObjectType
		(SELECT COALESCE(c1.[name], '''')
			FROM sys.columns AS c1
			INNER JOIN sys.index_columns AS ic1 ON c1.object_id = ic1.object_id
				AND c1.column_id = ic1.column_id
				AND ic1.key_ordinal = 1
			WHERE ic1.object_id = i.object_id
				AND ic1.index_id = i.index_id) +
			CASE
				WHEN INDEX_COL(''['' + u.[name] + ''].['' + o.[name] + '']'', i.index_id, 2) IS NULL THEN ''''
				ELSE '', '' + INDEX_COL(''['' + u.[name] + ''].['' + o.[name] + '']'', i.index_id, 2)
			END +
			CASE
				WHEN INDEX_COL(''['' + u.[name] + ''].['' + o.[name] + '']'', i.index_id, 3) IS NULL THEN ''''
				ELSE '', '' + INDEX_COL(''['' + u.[name] + ''].['' + o.[name] + '']'', i.index_id, 3)
			END +
			CASE
				WHEN INDEX_COL(''['' + u.[name] + ''].['' + o.[name] + '']'', i.index_id, 4) IS NULL THEN ''''
				ELSE '', '' + INDEX_COL(''['' + u.[name] + ''].['' + o.[name] + '']'', i.index_id, 4)
			END +
			CASE
				WHEN INDEX_COL(''['' + u.[name] + ''].['' + o.[name] + '']'', i.index_id, 5) IS NULL THEN ''''
				ELSE '', '' + INDEX_COL(''['' + u.[name] + ''].['' + o.[name] + '']'', i.index_id, 5)
			END +
			CASE
				WHEN INDEX_COL(''['' + u.[name] + ''].['' + o.[name] + '']'', i.index_id, 6) IS NULL THEN ''''
				ELSE '', '' + INDEX_COL(''['' + u.[name] + ''].['' + o.[name] + '']'', i.index_id, 6)
			END +
			CASE
				WHEN INDEX_COL(''['' + u.[name] + ''].['' + o.[name] + '']'', i.index_id, 7) IS NULL THEN ''''
				ELSE '', '' + INDEX_COL(''['' + u.[name] + ''].['' + o.[name] + '']'', i.index_id, 7)
			END +
			CASE
				WHEN INDEX_COL(''['' + u.[name] + ''].['' + o.[name] + '']'', i.index_id, 8) IS NULL THEN ''''
				ELSE '', '' + INDEX_COL(''['' + u.[name] + ''].['' + o.[name] + '']'', i.index_id, 8)
			END, -- AllColName
		(SELECT COALESCE(c1.[name], '''')
			FROM sys.columns AS c1
			INNER JOIN sys.index_columns AS ic1 ON c1.[object_id] = ic1.[object_id]
				AND c1.[column_id] = ic1.[column_id]
				AND ic1.[key_ordinal] = 1
			WHERE ic1.[object_id] = i.[object_id]
				AND ic1.[index_id] = i.[index_id]), -- ColName1
			CASE
				WHEN INDEX_COL(''['' + u.name + ''].[''+ o.name + '']'', i.index_id, 2) IS NULL THEN ''''
				ELSE INDEX_COL(''['' + u.[name] + ''].['' + o.[name] + '']'', i.[index_id],2)
			END, -- ColName2
			ps.used_page_count * 8, -- IndexSizeKB' + CHAR(13);

			IF @MajorVersion >= 10
				SET @loadIndexSQL = @loadIndexSQL + 'i.has_filter';
			ELSE
				SET @loadIndexSQL = @loadIndexSQL + '0';

			SET @loadIndexSQL = @loadIndexSQL + ', -- HasFilter' + CHAR(13);

			IF @MajorVersion >= 9
				SET @loadIndexSQL = @loadIndexSQL + 'CASE WHEN (SELECT COUNT(*) FROM sys.index_columns ic WHERE ic.object_id = i.object_id AND ic.index_id = i.index_id AND ic.is_included_column = 1) >= 1 THEN 1 ELSE 0 END';
			ELSE
				SET @loadIndexSQL = @loadIndexSQL + '0';

			SET @loadIndexSQL = @loadIndexSQL + ' -- HasIncludedColumn
	FROM sys.objects o WITH (NOLOCK)
		INNER JOIN sys.schemas u WITH (NOLOCK) ON o.schema_id = u.schema_id
		LEFT OUTER JOIN sys.indexes i WITH (NOLOCK) ON o.object_id = i.object_id
		LEFT OUTER JOIN sys.dm_db_partition_stats ps WITH (NOLOCK) ON ps.[object_id] = i.[object_id] AND ps.[index_id] = i.[index_id]
	WHERE o.[type] = ''U''
		AND o.[name] NOT IN (''dtproperties'')
		-- keep heaps (their index name is NULL) while filtering out auto-stat names
		AND (i.[name] IS NULL OR i.[name] NOT LIKE ''_WA_Sys_%'');
';

EXEC sp_executesql @loadIndexSQL;

-----------

SELECT 'Listing All Indexes' AS [Comments];

SELECT TableID, SchemaName, TableName, IndexID, IndexName, IndexType, ConstraintType, ObjectType, AllColName, IndexSizeKB, HasFilter, HasIncludedColumn
   FROM #AllIndexes
   ORDER BY TableName;

-----------
SELECT 'Listing Possible Redundant Index keys' AS [Comments];

SELECT DISTINCT i.TableName, i.IndexName,i.IndexType, i.ConstraintType, i.AllColName, i.IndexSizeKB, i.HasFilter, i.HasIncludedColumn
   FROM #AllIndexes AS i
   JOIN #AllIndexes AS i2 ON i.TableID = i2.TableID
      AND i.ColName1 = i2.ColName1
      AND i.IndexName <> i2.IndexName
      AND i.IndexType <> 'XML'
   ORDER BY i.TableName, i.AllColName;

----------
SELECT 'Listing Possible Reverse Index keys' AS [Comments];

SELECT DISTINCT I.TableName, I.IndexName, I.IndexType, I.ConstraintType, I.AllColName, I.IndexSizeKB, I.HasFilter, I.HasIncludedColumn
   FROM #AllIndexes AS I
   JOIN #AllIndexes AS I2 ON I.TableID = I2.TableID
      AND I.ColName1 = I2.ColName2
      AND I.ColName2 = I2.ColName1
      AND I.IndexName <> I2.IndexName
      AND I.IndexType <> 'XML';

DROP TABLE #AllIndexes;
Sep 01 2010
 

Let’s say you’re starting off in a new environment (new job, department, client, whatever) and after being shown your workspace and meeting with your supervisor, you sit down to start figuring out their databases.  You send an email requesting the name of the development database, and the reply you get is as follows:

DevDB1\Blah, 1436  (It's a named instance)

The backslash makes the fact that it’s a named instance pretty evident, and you know that named instances all get their own ports, so you think there’s not much to see here.  You fire up SSMS, try connecting to DevDB1\Blah, and can’t.  After some head scratching, you find that you can only connect when you specify DevDB1\Blah, 1436.

“What an awful instance name!” you think to yourself, and thoughts of creating instance naming standards pop up on your internal to-do list.  While doing some research a little later, you realize that spaces and special characters aren’t allowed in instance names.  Logging in again and running SELECT @@SERVERNAME confirms that the instance name is indeed DevDB1\Blah.  Some more head scratching reveals that the ", 1436" part specifies the port to connect to SQL Server on, which was necessary because your machine can’t connect to the SQL Server Browser Service.

What’s it do?

The SQL Server Browser Service is commonly thought of as “one of those other SQL Server services”, but life without it would be a bit more of a hassle.  Its main purpose is to aid in the detection of SQL Server instances and provide the information necessary for connecting to them.  It listens on UDP port 1434 and returns port numbers and version information for requested instances on a machine.  Its precursor originated in SQL Server 2000 as part of the SQL Server service; in SQL Server 2005 it was split out into its own separate service.

Each instance of SQL Server is assigned its own TCP network port for client connections.  The default instance listens for connections on TCP port 1433 unless configured to use a different static port.  Named instances can also be configured to use a static port, but by default are assigned dynamic ports that can change each time the service is restarted.  This dynamic port allocation is not allowed on the default instance.
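As an aside, if you’re ever unsure which port an instance ended up on, you can ask the instance itself (SQL 2005 and up):

-- shows the TCP port of your own connection to the instance
-- (returns NULL if you connected over shared memory or named pipes)
SELECT local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;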

When a user connects to the default instance, the connection is passed to TCP 1433 automatically.  When connecting to a named instance and the port number is unknown, the first connection made is to the Browser Service at UDP 1434, which then returns the proper port number on which to initiate the TCP connection.  If the Browser Service isn’t running or is otherwise inaccessible, the connection will fail unless the port number is specified in the connection string.  Specifying a port number means the browser service is not needed, so it is bypassed entirely.

The Proof is in the Packets

I decided to have some fun and see what’s really happening for myself.  To do this, I used 2 different tools:

  • PortQry is a free download from Microsoft that does exactly as its name describes.  It allows you to query a specific port on a specific machine and see what it returns.
  • Wireshark is a free network protocol analyzer that allows you to listen for any and all packets received by your network card(s).  If you’re interested in really knowing what’s going over the wire, this is the tool for you!

I’m sure you could arrive at similar results with other tools that may or may not be free, but these are what I’ve used for a while and am quite happy with.

My first trick is to query the browser service to return information about all instances on a box.  For this, I’m using PortQry.  All you need to do to get this information is query UDP port 1434 on a machine running SQL Server.  The syntax to do that is as follows:

PortQry.exe -n <Machine Name> -e 1434 -p UDP

And you should see a response that looks something like this:
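The response below is paraphrased; exact formatting and the version strings will vary with your PortQry and SQL Server versions.

Server's response:

ServerName BLOOMINGBANK
InstanceName MSSQLSERVER
IsClustered No
Version 10.0.1600.22
tcp 1433

ServerName BLOOMINGBANK
InstanceName COONLEY
IsClustered No
Version 10.0.1600.22
tcp 1056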

As you can see when I query my test machine, Bloomingbank, it returns info about the 2 instances of SQL Server on that machine.  One is the default instance (always named “MSSQLSERVER”) which is available at TCP 1433, and the other is named “COONLEY”, available at TCP 1056.

That’s only part of what the SQL Server Browser Service does though.  It also helps facilitate individual connections to instances when necessary.  This behavior can be observed with Wireshark, which can return information about any network traffic that’s taking place at all.  For this test, I connected to both default and named instances of SQL Server with varying combinations of browser service status and port specifications.  I used the SQLCMD client instead of SQL Server Management Studio because SSMS issues lots of queries when connecting to a server in order to populate the object explorer.  SQLCMD is much simpler and only does what you tell it.

In my first test, I’m going to see how the SQL Server Browser Service performs in a normal usage situation: simply logging into a named instance.  With Wireshark fired up and listening to all communications with my database server, I open a command prompt, run SQLCMD against the named instance, and see what happens.

(Screenshot: the SQLCMD console connecting to the COONLEY instance on the left, with the Wireshark capture on the right.)

After creating the connection with SQLCMD on the left, Wireshark (on the right) picked up 2 packets using the UDP Protocol, which are circled.  These are the request to the browser service and its response.  After that, TCP communication begins on the port that the service specified.  Looking at the highlighted section in the lower part of the Wireshark screenshot, you can see the response from the browser service containing the server name, instance name, version, and port to connect on.

So what happens if the browser service isn’t available?  In the next case I’ll run the exact same SQLCMD syntax except now the SQL Server Browser Service has been disabled.  Disabling the service is rather easy and can be done from the SQL Server Configuration Manager.  Wireshark shows the following:

Now we see repeated UDP calls to the browser service, but no replies.  After 5 tries and no response, the client gives up and the connection attempt fails.  The SQL Server instance is up and running just fine, but since the browser service isn’t available to tell the client how to connect, it will never succeed.

It’s important to note that the SQL Server Browser Service is not required for connections to a named instance.  If you know the port that the named instance is listening on ahead of time, there’s no need to connect to the service at all.  For this next test I’ll keep the browser service disabled and try to connect using SQLCMD, but this time specifying the port in my connection string.
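With my test setup, that looks like the following (sqlcmd’s -S argument takes the port after a comma; 1056 is the port COONLEY happened to be assigned, and -E requests a trusted connection):

rem requires the browser service to look up the port
sqlcmd -S Bloomingbank\COONLEY -E

rem bypasses the browser service entirely
sqlcmd -S Bloomingbank\COONLEY,1056 -E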

This time we observe that no UDP packets are transmitted at all, meaning the browser service was never queried.  Once the addresses are resolved, the TCP communication with the instance begins immediately. Trying to make the same connection with the browser service enabled would yield the same result.  If the client is told what port to connect on, it sees no need to query the browser service for port information.

After all this talk about named instances, let’s switch gears and talk about the default instance a little bit.  I’ll perform the same test using Wireshark while connecting to the default instance with the browser service re-enabled:

Again there are no UDP packets, meaning the browser service was not utilized.  This makes sense because we’re connecting to a default instance on its default port, which is TCP 1433.  Since the client correctly assumes the port, it doesn’t see the need to waste the time and effort of querying the browser service.

But what if the client assumes the port incorrectly?  As I mentioned above, it is possible to configure the default instance to use a different static port using the SQL Server Configuration Manager.  I changed my default instance to use TCP 4858 instead of the default port.  I’ll run the same query as above:

You’ll notice the result is a lot more colorful now. The client always assumes that the default instance can be found on the default port. When this is not the case it will try to connect multiple times but will never succeed. In the trials I ran, SQLCMD gave up and timed out after about 8 seconds of trying to connect. After some digging around, I was able to find a Microsoft blog post confirming that their client network libraries do not attempt to dynamically discover a TCP port for the default instance. They suggest several solutions for situations where the default instance is not on the default port, the simplest of which is just to specify the correct port number in your connection string.
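The same comma syntax works from application code too; an ADO.NET-style connection string for my relocated default instance would look something like this (4858 being the port I moved it to):

Server=Bloomingbank,4858;Database=master;Integrated Security=SSPI;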

The Chart!

If there’s something to be learned, I’ll probably find a way to make a chart out of it! In this case I’ve taken all of the above results and put together charts showing client behavior for both the default instance and named instances.

Further Reading

SQL Server Resolution Protocol Specification – a short (~30 pages) yet interesting read about the protocol the browser service uses for communication.  Microsoft was kind enough to release it as a completely open standard.