Jun 24, 2010

A few days ago Tim Ford (blog | twitter) put forth a challenge to describe a transaction in a non-technical fashion.  I could have replied to the post directly, but I figured it would be rather wordy.  Instead I decided to follow the lead of Janice Lee (blog | twitter) and write a post about it, providing Tim with a neat and tidy hyperlink instead.

One thought I had is that committing a transaction is like jumping off a diving board.

It’s Atomic, meaning it happens or it doesn’t; there is no in-between.  If you jump off the board, you’ll land in the water; the transaction is committed.  If you don’t, you’ll still be on the diving board as if nothing ever happened.  So long as gravity is in effect, there’s no way you can jump off the board and not land in the water.

It’s Consistent with respect to physics.  Matter was neither created nor destroyed, and if you left the board and are now in the water, you’ll be displacing that water.  As we used to say in Physics Van, Newton’s Laws of Motion are guaranteed to work at least 362 days a year, so you should be covered.

It’s Isolated as each dive event is not affected by others, even if they occur at the exact same time.  Safety Bear says that no more than one person can be on a diving board at a time, so there won’t be any interference from another diver on your board.  A pool may have more than one diving board, but they’re always spaced such that it’s not possible to interfere with the others.

Finally it’s Durable, because you can’t ever undo the fact that you did that dive.  Even if you exit the pool immediately and dry off, there will still be evidence that you did it, such as splash marks on the pool deck.  The pool will also contain slightly less water as further evidence that it happened.

And that’s about it for my horrible transaction analogy.  Stay tuned until next time, where I’ll compare negotiating with a SAN Administrator to removing your own spleen with a nail clipper!

Jun 22, 2010

No more than an output to me, but it sure was to SQL Server!  This issue kept me busy over the weekend, and most definitely gave me a case of Saturday Night SQL Fever (as opposed to “SQL Saturday Night Fever”, for which the only cure is more karaoke).

When executing a stored procedure with an OUTPUT parameter, I always believed that the pre-execution state of a variable being written to by the output parameter didn’t matter, as it would be overwritten. A quick poll on Twitter showed me that I’m not alone; however, I recently discovered this isn’t always the case.

Business requirements at my workplace are such that when a SQL Server Agent job needs to be modified, a script must be generated that does not update the job, but rather drops it and re-creates a new job. I was working on such a script (simplified and shown below) which uses the sp_delete_job and sp_add_job stored procedures to drop and add the job respectively.  Here’s what I came up with:

SET NOCOUNT ON;
DECLARE @jobId uniqueidentifier;
DECLARE @JobName NVARCHAR(128);
DECLARE @ReturnCode INT;
DECLARE @errCode INT;
SET @JobName = 'AJob';

BEGIN TRANSACTION;
SET @jobId = (SELECT job_id FROM msdb.dbo.sysjobs WHERE name = @JobName);

-- delete the job if it exists
IF(@jobId IS NOT NULL)
BEGIN
   EXEC @ReturnCode = msdb.dbo.sp_delete_job @job_id=@jobId
   SET @errCode = @@ERROR; -- capture immediately; @@ERROR is reset after every statement
   IF(@errCode != 0 OR @ReturnCode != 0)
   BEGIN
      GOTO QuitWithRollback;
   END
   PRINT 'deleted job';
   --SET @jobId = NULL;
END

-- create the job
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=@JobName,
      @enabled=0,
      @job_id = @jobId OUTPUT
SET @errCode = @@ERROR; -- capture immediately; @@ERROR is reset after every statement
IF(@errCode != 0 OR @ReturnCode != 0)
BEGIN
   GOTO QuitWithRollback;
END
PRINT 'added job';

COMMIT TRANSACTION;
RETURN;

QuitWithRollback:
IF(@@TRANCOUNT > 0)
   ROLLBACK TRANSACTION;

PRINT 'Err: ' + CAST(@errCode AS varchar(10)) + ' ret: ' + CAST(@ReturnCode AS varchar(10));

If you run this script on SQL Server 2008 SP1, you’ll see that the first time it’s run the job is added successfully. Running it again shows that the delete succeeds but the add fails and returns error 14274: “Cannot add, update, or delete a job (or its steps or schedules) that originated from an MSX server.”


Scouring the web for information on error 14274, I found plenty of articles and blog posts regarding SQL 2000 that describe how this error occurs when you change the name of a machine, and how to fix it by altering values in the ‘originating_server’ column of the msdb.dbo.sysjobs table. The problem is that the machine definitely hadn’t changed names since the job was created, and in SQL 2008 there is no ‘originating_server’ column; it has apparently been replaced by ‘originating_server_id’.

After much head scratching, I found that the cause of this error was the state of the @jobId variable prior to the execution of sp_add_job. If you uncomment the “SET @jobId = NULL” line in the above example and run again, you’ll see that it completes successfully the first time and every time thereafter.

Checking out the T-SQL code that’s under the hood of sp_add_job (which admittedly I should have done sooner), I found this:

IF (@job_id IS NULL)
BEGIN
   -- Assign the GUID
   SELECT @job_id = NEWID()
END
ELSE
BEGIN
   -- A job ID has been provided, so check that the caller is SQLServerAgent (inserting an MSX job)
   IF (PROGRAM_NAME() NOT LIKE N'SQLAgent%')
   BEGIN
      RAISERROR(14274, -1, -1)
      RETURN(1) -- Failure
   END
END

There are 2 cases for @job_id: either it’s NULL and a GUID needs to be assigned, or it’s not NULL and an error will be thrown if it wasn’t called by the SQL Server Agent.

Looking at the sp_add_job entry in Books Online, it shows that the @job_id parameter is an OUTPUT parameter with no mention of an input value being used, which is why I assumed any value stored in the @jobId variable would simply be overwritten. BOL does mention “with a default of NULL”, but that wording didn’t get the point across (to me, at least). If one creates a variable with a default of NULL and later assigns it a value, the default is still NULL; how does that violate the documentation’s wishes?
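
The underlying behavior is easy to demonstrate: T-SQL OUTPUT parameters are really input/output parameters, so whatever value the variable holds before the call is visible inside the procedure. Here’s a minimal sketch (the procedure name is made up for illustration):

```sql
CREATE PROCEDURE dbo.ShowOutputBehavior
   @val INT OUTPUT
AS
BEGIN
   -- The variable's pre-call value is visible inside the procedure
   PRINT 'Value on entry: ' + ISNULL(CAST(@val AS varchar(10)), 'NULL');
   SET @val = 42;
END
GO

DECLARE @x INT;
EXEC dbo.ShowOutputBehavior @val = @x OUTPUT;  -- prints 'Value on entry: NULL'

SET @x = 7;
EXEC dbo.ShowOutputBehavior @val = @x OUTPUT;  -- prints 'Value on entry: 7'
```

This is exactly why sp_add_job behaves differently depending on the state of @jobId before the call: it branches on whether the incoming value is NULL.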

In my opinion, there are two ways to reduce confusion on this issue:

1) Modify BOL to mention a little more forcefully that @job_id must be NULL when adding a new job. Something like “passing a non-null variable to @job_id is for internal use only” would be a great improvement.

2) Return a more descriptive error code when @job_id is not NULL.  While I’m sure the error code makes sense if you have complete understanding of what’s happening inside sp_add_job, it’s not very helpful if all you’ve done is read BOL.

Jun 10, 2010

I’ve never been a fan of Apple.  Despite making a living off SQL Server, I don’t consider myself to be a Windows fanboy by any stretch of the imagination either, but I never imagined that I would buy a Mac.

One of the easiest arguing points back in the day was the availability of software.  Games on a Mac?  There was that sliding apple puzzle thing, that’s about it.  (Ok, there were more than that, but if I couldn’t play Doom II on it I wasn’t interested!)  Then they switched to Intel processors and OSX got better and better.  The software reason disappeared, and Mac hardware seemed every bit as capable as any PC I could buy or build, but it was still way more expensive than comparable PC hardware.  I also couldn’t stand the condescending attitude that Apple was breeding.  Those “I’m a PC, I’m a Mac” commercials really irked me.  For what little it’s worth to me, the hardware was starting to look really good, but aesthetics are pretty low on the list of things I’d consider while buying a computer.

My 6 year old Dell laptop has been dying a slow death for the past few months.  After some experimentation, I determined that the memory controller was going bad, and repairing it (i.e. a new motherboard) wouldn’t be very cost-effective for a machine of that age.

I bought myself a new laptop from Dell, and it arrived a few days later.  It was fast and did everything I wanted, so I was happy.  About a week after I got it I decided to start doing all the available system updates, one of which was a “Dell Recommended” BIOS update.  I downloaded the updater and ran it.  After about 2 minutes, it froze at about 31%.  Knowing that your computer is a paperweight if you interrupt a BIOS flash, I took the patient approach.  An hour later it was still at 31%.  I decided to let it sit overnight just to be sure, and the next morning nothing had changed.  I tried forcing a shutdown by holding down the power button, but that didn’t do anything.  I ended up disconnecting the AC power and removing the battery to finally get it to turn off.  As was to be expected, it was now a brick and would do nothing for me.  After talking with Dell tech support, they determined I needed a new motherboard and overnighted one to a technician to install it.

The technician came out 2 days later and had a new motherboard installed in about 30 minutes.  Within another 30 minutes I had the same update frozen at 31% again.  Having bricked this computer twice within 10 days of receiving it, I decided that keeping it wasn’t the greatest idea.  I had either gotten faulty hardware twice, or the update was bad – shouldn’t they heavily test these things?  I returned it to Dell for a full refund.

After much thought, I decided to get a 15″ MacBook Pro.  I’ve heard a lot of good things about Mac hardware lately, the software seems extremely capable, and they’re pretty good looking too!  If I have to pay a premium for something that isn’t going to brick during a BIOS flash, then so be it.  (Yes, I did pay a lot more – it cost more than twice what the Dell did.)  I also view this as an excellent opportunity to really learn how to use a Mac.  Anyone can click on Safari while messing around in an Apple store, but I’m very interested in learning everything else there is to know about OSX.

Three weeks later, I’m still quite happy with my purchase.  The best thing I’ve learned so far is all the gestures for the trackpad – it makes things incredibly fast.  Yes, I have installed Windows 7 (in Parallels) though I really haven’t used it for much other than installing SQL Server Management Studio at this point.  My goal is to use Mac alternatives as much as possible on this machine.  I’ll be sure to report back on my progress.

Jun 07, 2010

This post is my contribution to T-SQL Tuesday #007, hosted this month by the SQLChicken himself, Jorge Segarra (Blog | Twitter).

Since I’m a bad DBA and have yet to get my hands on a copy of R2, I’ll make use of the rule that we can discuss any feature from 2008.  My favorite is filtered indexes, as they were the solution to an issue I had been trying to fix for 5 years.

The Background

In a former life, I spent several years working as a building inspector for one of Chicago’s western suburbs.  It quickly became evident that they were in need of a system for keeping track of permits, inspections, and other day-to-day activities, as their gigantic spreadsheet clearly wasn’t cutting it.  Wanting to learn about databases and see if I could solve said problem, I picked up a book on Access 2000 and my solution was born.  It’s grown up a lot in the past 7 years, and is now used by 2 other municipalities as well.

The Problem

In my system, building permits can have one or two unique identifiers depending on their stage in the permit lifecycle.  That probably doesn’t make sense, so I’ll elaborate.  When a resident applies for a building permit, a record is created in the permit table with a primary key known as the PermitId.  This permit has a status of pending, as it has only been applied for and not issued.  Many permits never make it to the issued state, as they may be denied or their application withdrawn.  Permits that make it through the approval process and are issued receive a Permit Number, another unique identifier.  Enforcing uniqueness on the PermitNumber column was desired, but impossible as all pending permits had a NULL value for PermitNumber.  The best I could do for this (given my knowledge at the time) was to create a non-unique index on PermitNumber and hope that duplicate values didn’t end up occurring.  While that may be good enough for some, it always irked me.

Some may ask why PermitId and PermitNumber can’t just be combined.  The answer is that it’s a business requirement.  All three of the cities I do business with issue PermitNumbers in sequential order based on the permit issue date, so there’s no way the PermitNumber can be determined ahead of time.  Also the PermitId is an internal value and is only known to the database for use in joins.  PermitNumber, on the other hand, is known to the public and is commonly searched on, so indexing it is desirable.

The Solution

The Filtered Index feature allowed me to create a unique index on PermitNumber where the value is not null.  I was able to enforce uniqueness with permit numbers, and all was right in the world once again.  The syntax to do this was extremely simple:

CREATE UNIQUE INDEX UQ_Permits_PermitNumber
ON Permits(PermitNumber)
WHERE PermitNumber IS NOT NULL; -- The WHERE clause makes it a filtered index
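
With that index in place, any number of pending permits can carry a NULL PermitNumber, while duplicate non-NULL values are rejected.  A quick sketch of the behavior (the table definition and column types here are assumptions for illustration, not my production schema):

```sql
CREATE TABLE Permits (
   PermitId INT IDENTITY PRIMARY KEY,
   PermitNumber INT NULL  -- assigned only when the permit is issued
);

CREATE UNIQUE INDEX UQ_Permits_PermitNumber
ON Permits(PermitNumber)
WHERE PermitNumber IS NOT NULL;

INSERT INTO Permits (PermitNumber) VALUES (NULL);   -- pending permit: OK
INSERT INTO Permits (PermitNumber) VALUES (NULL);   -- another pending permit: OK
INSERT INTO Permits (PermitNumber) VALUES (1001);   -- issued permit: OK
INSERT INTO Permits (PermitNumber) VALUES (1001);   -- duplicate number: fails with a duplicate key error
```

Note that a plain (unfiltered) unique index couldn’t do this, since it would treat the second NULL as a duplicate of the first.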

And there you have it, a simple solution using elegant syntax.  ‘Tis a shame that this feature didn’t appear until 2008, but I’m sure glad it’s there now!

Jun 04, 2010

At my previous employer I took part in several projects that involved migrating databases not only to new hardware but also to new versions of SQL Server.  We upgraded machines running the 2000 and 2005 versions of SQL Server to 2008.  Here’s how we went about doing it, and some helpful hints we figured out along the way.

Disclaimers

I’m fully aware there are lots of ways to migrate hardware and data between machines and database versions; I’ll just be talking about the way we did it.  We were fortunate enough not to have to worry about downtime, as this was carried out during scheduled maintenance windows and at other times when bringing affected products out of service was deemed acceptable.

How We Did It

We started out by taking full backups of all databases on the source instance and restoring them (WITH NORECOVERY) on the target instance.  All source databases were then placed in read-only mode, and differential backups were taken and restored to the target (this time WITH RECOVERY).  Target databases were then taken out of read-only mode and scripts were run on each database to perform a laundry-list of tasks such as:

  • DBCC CHECKDB to make sure our newly-restored DB is consistent
  • Change the database owner
  • Update the compatibility level
  • Ensure PAGE_VERIFY is set to CHECKSUM
  • Run sp_updatestats
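
The backup-and-restore sequence described above can be sketched roughly as follows (the database name and backup paths are made up for illustration):

```sql
-- On the source instance: full backup while the database is still writable
BACKUP DATABASE MyDb TO DISK = N'\\backupshare\MyDb_full.bak' WITH INIT;

-- On the target instance: restore, leaving the database able to accept further restores
RESTORE DATABASE MyDb FROM DISK = N'\\backupshare\MyDb_full.bak' WITH NORECOVERY;

-- Back on the source: freeze writes, then take a differential
ALTER DATABASE MyDb SET READ_ONLY WITH ROLLBACK IMMEDIATE;
BACKUP DATABASE MyDb TO DISK = N'\\backupshare\MyDb_diff.bak' WITH DIFFERENTIAL, INIT;

-- On the target: apply the differential and bring the database online
RESTORE DATABASE MyDb FROM DISK = N'\\backupshare\MyDb_diff.bak' WITH RECOVERY;
ALTER DATABASE MyDb SET READ_WRITE;
```

The differential keeps the final cutover window small: the full backup can be taken and restored well in advance, and only the changes since then need to move during the maintenance window.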

After the scripts completed successfully, the application servers were pointed to the new instances either by editing configuration files or redirecting connections via DNS.  Applications were checked to make sure nothing broke, and the migration was complete.

Tips & Tricks

Do as much as you can in advance. The migration process I described above looks rather simple because it is, but a lot of work was done ahead of time to allow that to happen.  Scheduled Tasks and Maintenance Plans were all migrated in advance by scripting them out in Management Studio and creating them in a disabled state on the target instance (they were enabled once migration was complete).  SQL Server logins were transferred in a similar manner using the Microsoft-provided sp_help_revlogin stored procedure.

Practice makes perfect. Script out everything that you can in advance and “rehearse” the migration several times before it actually takes place, preferably on the new hardware before it is placed into service.  Scripting it all out and testing repeatedly will ensure that nothing is forgotten and should reduce the chance of surprises at launch time.  If the launch is happening in the wee hours of the morning, scripts will make sure you have much less to remember.

Change the database owner. Whatever SQL Server login or Windows User restores a database becomes the owner of that database.  If that’s not desired, be sure to change the database owner using sp_changedbowner (deprecated in SQL 2008) or ALTER AUTHORIZATION syntax instead (2005/2008 only).
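
For example (the database name is made up; the target login here is sa for illustration):

```sql
-- Deprecated in SQL 2008, but still works:
EXEC MyDb.dbo.sp_changedbowner 'sa';

-- Preferred on 2005/2008:
ALTER AUTHORIZATION ON DATABASE::MyDb TO sa;
```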

Update the COMPATIBILITY_LEVEL. When you restore a database from an older version of SQL Server to a newer one, COMPATIBILITY_LEVEL doesn’t change.  If you want to take advantage of the syntax & functionality offered by the newer version, you’ll need to up this value to 90 (SQL 2005) or 100 (2008) using either sp_dbcmptlevel (deprecated in SQL 2008) or ALTER DATABASE (2008 only).
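
For example, to bring a restored database up to 2008 behavior (database name made up):

```sql
-- Deprecated in SQL 2008:
EXEC sp_dbcmptlevel 'MyDb', 100;

-- 2008 syntax:
ALTER DATABASE MyDb SET COMPATIBILITY_LEVEL = 100;
```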

Ensure PAGE_VERIFY is set to CHECKSUM.  SQL 2000 doesn’t support CHECKSUM, so any database upgraded from 2000 will probably be set to TORN_PAGE_DETECTION.  CHECKSUM is slightly more CPU-intensive than TORN_PAGE_DETECTION, but it can detect corruption that TORN_PAGE_DETECTION misses, and it also isn’t deprecated.  Use it!  PAGE_VERIFY can be changed using the ALTER DATABASE syntax in both 2005 and 2008.
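
The change is a one-liner (database name made up):

```sql
ALTER DATABASE MyDb SET PAGE_VERIFY CHECKSUM;
```

Note that checksums are only written as pages are modified after the change, so existing pages aren’t protected until they’re rewritten.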

Run sp_updatestats. You’ll want to do this because each version of SQL Server keeps different statistics.  Running this will ensure you have all the necessary stats for your version.

Like I said before, I’m sure there are much more elegant ways to migrate databases depending on your requirements; this is just how we did it.  Comments are most welcome!