Mar 11 2014
 

This post is part of T-SQL Tuesday #52, which is being hosted this month by Michael J. Swart (@MJSwart). Michael is asking us to argue against a popular opinion, and I’m more than happy to do so, as this is a belief that I’ve kept to myself for quite a while.

SQL Server’s row compression feature can be an amazing tool. Not only is it lightweight on CPU usage, especially when compared to page compression, but it can save a significant amount of disk space as well. Your data also remains compressed while in the buffer pool, meaning more rows can be stored in memory, reducing the need to make slower requests to disk. On top of all that, some queries (especially those involving index scans) can see dramatic performance improvements.

In fact, row compression is so good that Microsoft’s whitepaper actually states “If row compression results in space savings and the system can accommodate a 10 percent increase in CPU usage, all data should be row-compressed.”

Yes, row compression is a wonderful thing, and the databases I maintain frequently benefit from its use.

But I hate it.

Why? Because all too often, features designed to help make things easier also make people lazy.

By far, the biggest issue that row compression addresses is poor data typing: the use of a data type that isn’t appropriate for the values at hand. For example, if a column is only going to store the values 1 through 5, an integer data type is not necessary. A tinyint data type would be just as effective and would consume only one quarter of the space. However, if you are unable to change the data type, perhaps because the database is part of an application written by a third party, row compression can be a big help.
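
If you want to see that one-quarter difference for yourself, here’s a quick check against literal values (nothing table-specific, just the fixed storage sizes of the two types):

-- an int always occupies 4 bytes, even for tiny values;
-- a tinyint stores the same values 1 through 5 in a single byte
SELECT
	DATALENGTH(CAST(5 AS int))     AS int_bytes,     -- 4
	DATALENGTH(CAST(5 AS tinyint)) AS tinyint_bytes; -- 1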

Row compression allows the storage engine to treat fixed-width data types as if they were variable-width. This means the disk space that isn’t needed to store a value in a fixed-width column and would typically go unused can be put to work. The savings can be tremendous, and SQL Server’s data compression features are completely transparent to end users or applications – all the magic happens behind the scenes, which is why a third party application would be none the wiser.
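
For reference, enabling it really is just a rebuild option, and you can even ask SQL Server to estimate the savings before committing. A minimal sketch, using a hypothetical dbo.SomeTable:

-- estimate the space savings row compression would provide (table name is hypothetical)
EXEC sys.sp_estimate_data_compression_savings
	@schema_name      = N'dbo',
	@object_name      = N'SomeTable',
	@index_id         = NULL,
	@partition_number = NULL,
	@data_compression = N'ROW';

-- "flip the switch": rebuild the table (and its clustered index) with row compression
ALTER TABLE dbo.SomeTable REBUILD WITH (DATA_COMPRESSION = ROW);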

But what if you are able to change those data types, and just don’t feel the need to do so anymore? Data compression gives you most of the advantages of proper data typing, but all you have to do to get them is flip a switch – no pesky forethought necessary. And that’s why it’s terrible. Because for every person out there who designs their databases and data types with care, there are many more who aren’t interested, don’t care, or don’t even realize it’s an option. Features like row compression that mask these issues aren’t going to interest anyone in solving them the right way.

So while row compression is a wonderful tool and can do amazing things when used properly, don’t forget it’s also an enabler.

Jan 31 2014
 

The ability to keep track of what’s being executed on a SQL Server instance, or to create a read-only copy of a database at a specific point in time can be very valuable if required by the solution you are developing. But a lesser-known fact about SQL Auditing and Database Snapshots is that not only do they work well on their own, but they also play nicely together.

The Setup

Let’s say you work for that awesome company AdventureWorks, which embraces open data so much that you can download their corporate database from CodePlex. That database contains some super-sensitive payroll data, and they’d like to keep track of everyone who has accessed it. To do this, you set up SQL Auditing to capture all SELECT operations on the HumanResources.EmployeePayHistory table and log them to a file.

-- create a server audit
USE [master]
GO

CREATE SERVER AUDIT [AWAudit]
TO FILE (
FILEPATH = N'C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012\MSSQL\DATA')
WITH (QUEUE_DELAY = 1000, ON_FAILURE = CONTINUE);
GO

-- * in a production system you'd probably want a more stringent ON_FAILURE
--   value than "continue", but for demonstration purposes it's fine

ALTER SERVER AUDIT [AWAudit] WITH (STATE = ON);

-- create a database audit specification
USE [AdventureWorks2012]
GO

CREATE DATABASE AUDIT SPECIFICATION [EmployeePayAuditSpec]
FOR SERVER AUDIT [AWAudit]
ADD (SELECT ON OBJECT::[HumanResources].[EmployeePayHistory] BY [public])
GO

ALTER DATABASE AUDIT SPECIFICATION [EmployeePayAuditSpec] WITH (STATE = ON);

So now anytime anyone selects from that table, there will be a record of it:

-- select from the table being audited
SELECT TOP (100) * FROM HumanResources.EmployeePayHistory;

-- query the audit file
SELECT
	DATEADD(hh, DATEDIFF(hh, GETUTCDATE(), CURRENT_TIMESTAMP), event_time) AS EventTime,
	session_server_principal_name,
	database_name,
	statement
FROM sys.fn_get_audit_file('C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012\MSSQL\DATA\*',NULL,NULL);

So far, so good. Now let’s continue with our story and say that management wants to be able to run reports throughout the workday that show the state of the database at 6:00 am. You find the appropriate solution to be a database snapshot, so you set up a job to create a snapshot daily at 6:00 am and another one to drop it late in the evening. All the reports can then query the snapshot and get the exact state of the database as it was at 6:00 that morning.

-- create snapshot
CREATE DATABASE AdventureWorks2012_Snap ON
(NAME = AdventureWorks2012_Data,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012\MSSQL\DATA\AW2012.ss')
AS SNAPSHOT OF AdventureWorks2012;
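
The late-evening cleanup job has even less to it, since dropping a snapshot is just a regular DROP DATABASE:

-- drop the snapshot (run by the late-evening job)
DROP DATABASE AdventureWorks2012_Snap;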

But what about that audit? Since a snapshot contains absolutely everything in the database at the moment it was created, it will have all that sensitive pay history data as well. How can you audit that?

The Free Lunch

The answer is that you don’t have to worry. A snapshot contains everything in the database at the moment it was created, and that doesn’t just apply to data – that goes for settings too, including audits. If someone queries an audited table in the snapshot, it will be recorded by the audit just like a query against the base database. You’ll be able to tell the query was run on the snapshot by looking at the database name column:

-- query snapshot
USE [AdventureWorks2012_Snap]
GO

SELECT TOP (100) * FROM HumanResources.EmployeePayHistory;

-- query audit file
SELECT
	DATEADD(hh, DATEDIFF(hh, GETUTCDATE(), CURRENT_TIMESTAMP), event_time) AS EventTime,
	session_server_principal_name,
	database_name,
	statement
FROM sys.fn_get_audit_file('C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012\MSSQL\DATA\*',NULL,NULL);

But what if you don’t want the snapshot to be audited?

The Free Lunch is at Grandma’s House

I can only speak from personal experience, but whenever I go visit my grandmother, I go knowing that I will be eating whether I’m hungry or not. She loves cooking and watching her grandchildren eat, and – let’s face it – there’s just no way to say no to grandma.

Similarly, there’s no way to say no to a snapshot. However the database was configured at the moment the snapshot was created is exactly how it will appear in the snapshot. It’s important to remember that database snapshots are read-only, so even if you try to disable the audit on a snapshot, it won’t work:

USE [AdventureWorks2012_Snap]
GO

ALTER DATABASE AUDIT SPECIFICATION [EmployeePayAuditSpec] WITH (STATE = OFF);

The only way to create a snapshot without the audit in this case would be to stop the audit, create the snapshot, and then restart the audit. This raises other issues, namely that you probably shouldn’t be stopping an audit of sensitive data like that.
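
If you ever decided that tradeoff was acceptable, the sequence would look something like this (a sketch reusing the objects from above, and assuming the earlier snapshot has already been dropped):

USE [AdventureWorks2012]
GO

-- pause auditing on the source database
ALTER DATABASE AUDIT SPECIFICATION [EmployeePayAuditSpec] WITH (STATE = OFF);
GO

-- create the snapshot while the audit specification is disabled
CREATE DATABASE AdventureWorks2012_Snap ON
(NAME = AdventureWorks2012_Data,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012\MSSQL\DATA\AW2012.ss')
AS SNAPSHOT OF AdventureWorks2012;
GO

-- turn auditing back on
ALTER DATABASE AUDIT SPECIFICATION [EmployeePayAuditSpec] WITH (STATE = ON);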

Nov 04 2013
 

I’m very proud to have contributed a chapter to Tribal SQL, which is now available for purchase at Amazon and finer bookstores everywhere. This is the first book I’ve had any part in writing, something I never thought I would do. I always hated English and literature classes in school – if it wasn’t math, hard science, or architectural history, I wanted no part of it. I credit the SQL community and blogging for slowly getting me excited about the written word.

Getting this book from idea to print was not a sprint but a marathon. While I’m sure there was plenty going on behind the scenes that I wasn’t aware of, here’s how everything came together from my point of view.

How I Got Involved

The first I heard about “SQL Tribal Knowledge” was from Matt Velic’s (@mvelic) blog post almost two years ago. Despite being a fan of the MidnightDBAs (@MidnightDBA), Jen and Sean McCown, I missed their blog post announcing the project. Basically, they wanted to get a whole bunch of unpublished people together, have everyone write a chapter, and then assemble them into a book. Seemed like a great idea to me, so I submitted a few topic ideas, and ended up writing about data compression – a feature I’ve done lots of work with and know rather well.

Writing

Actually writing the chapter was both the hardest and the easiest part for me. Getting started was difficult because I wasn’t sure what style of writing to use. I read through about a dozen other books, not for content, but to see what tone the authors took when putting their words to paper. I ended up going with something not much different from a blog post – perhaps slightly more formal. With a style established, everything flowed easily. After about 2 months, my initial draft was complete and to my liking.

Peer Editing

After the first drafts came peer editing. All the authors picked two other chapters to read and comment on. It was a great gut-check on things like grammar, effectiveness of explanations, etc. With both reviews in hand, it was back to the drawing board (well, keyboard) for revisions.

Volunteer Editing

After peer editing there was a second round of reviews, this time from outside the author pool. Over 20 bloggers and speakers from the SQL Server community volunteered their time to read and comment on our work. This was followed by another round of revisions.

Publishing

Throughout the process, Jen and Sean were trying to find a publisher. There was some brief talk about self-publishing, but then Red Gate got involved and agreed to publish the book, now named “Tribal SQL”. This involved – you guessed it – another round of editing, this time by Red Gate’s Tony Davis (@TonyTheEditor). Tony was delightful to work with and I have tremendous respect for what he does. After a few rounds of back-and-forth with him, my chapter was done.

From this point on I’m sure there were a lot of things taking place that I didn’t know about, but the end result is that the book was published, and Red Gate was kind enough to hold a launch event at the PASS Summit and distribute copies. They also held a book signing for all the authors that were in attendance.

Would I do it again?

I would. Writing a chapter is a lot of work – writing an entire book is obviously much more, but I found it enjoyable and rewarding. Given the right topic and/or group of co-authors, I’m sure I could have just as much fun working on another one.

Write Book, Get Published, ?????, Profit?

HA! I should mention that all royalties from the sale of this book are being donated to Computers4Africa. Even if that weren’t the case, any money we might get would work out to a paltry rate when divided by the number of hours spent working on this project. If you ever decide to write a book, make sure you’re doing it for the right reasons. Money is not one of them.

Thank You

Finally, some thanks are in order. Thanks to Jen and Sean McCown for putting this all together, and to Matt Velic for blogging and making me aware! Thanks to all the other co-authors and editors for their time and hard work. Finally, thanks to the wonderful people at Red Gate, not only for publishing this book, but for the tremendous support they give to the SQL Server community as a whole.

Aug 06 2013
 

When I’m presenting on SQL Server data compression, I frequently get asked if data compression and backup compression can be used together. I keep meaning to put together a blog post with a demo so I have something to direct people towards for further reading. So here we are – let’s give it the old college try and find out!

Compression Primer

Methods for compressing data fall into 2 categories: lossy and lossless. Lossy compression reduces the size of data by strategically removing parts of it that are unlikely to be missed. This data is lost forever, hence the term lossy. Prime examples of lossy compression can be found in digital images, videos, and music. It’s very easy to remove a little color depth from an image or reduce the bitrate of a sound or video file in a way that shrinks its size without significantly changing its appearance to an observer. Lossy compression is great in these cases because our eyes and ears are far from perfect, but for things like a document or a database it’s not a good choice. Nobody would be happy if data randomly started disappearing to save space (well, maybe the DBA would be, but only if the data that’s disappearing are those copies of tables with “_old” in the name that nobody has looked at in years…).

For types of data where nothing can be lost, there’s lossless compression. Lossless compression can’t remove anything, so it generally does things like build dictionaries of repeating patterns and create pointers to the dictionary when appropriate. Centralizing duplicate data in a dictionary means multiple copies no longer need to be stored, so the compressed version will be smaller. Both SQL Server data compression and backup compression use lossless compression methods.

Data Compression and Backup Compression, circa 1966.

SQL Server data compression and backup compression are completely separate features; they do very different things in different ways. At a high level, data compression reduces the size of records in memory (and on disk as well), allowing more records to fit on a data page thereby decreasing the space needed to store the data. Backup compression compresses the database as it’s being written to a backup file, which means backups will require less storage space. They build off each other well because data compression can only do so much: it can de-duplicate data on a single page, but not across multiple pages. Even with data compression enabled in page mode, its highest setting, it’s very likely that similar data will exist on multiple pages. Backup compression can pick up where data compression leaves off and compress things a bit more by eliminating those duplicates. Like Batman and Superman, they combine their powers for good, not evil. If you have the ability to compress both data and backups, it’s definitely worth your time experimenting to see if your environment benefits from using them together.

Demo Time!

Let’s take a look at how they complement each other with a demo. I used the AdventureWorks2012 database, which is available for download on CodePlex if you don’t already have it.

The AdventureWorks databases are great for most examples, but none of the included tables are very big, and to make things more apparent I wanted to show some larger tables. Fortunately several tools to increase the size of tables have been written over the past few years. I used the script written by Jonathan Kehayias. This script creates 2 new tables: Sales.SalesOrderDetailEnlarged and Sales.SalesOrderHeaderEnlarged. Running the entire script the first time and then the INSERT block 3 additional times will bring the total size of the AdventureWorks2012 database up to about 4.5 GB.
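
To verify where you ended up, a quick call to sp_spaceused reports the database and table sizes (your exact numbers will vary depending on how many times you ran the INSERT block):

USE [AdventureWorks2012]
GO

-- overall database size
EXEC sp_spaceused;

-- size of the two enlarged tables
EXEC sp_spaceused 'Sales.SalesOrderDetailEnlarged';
EXEC sp_spaceused 'Sales.SalesOrderHeaderEnlarged';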

Once our tables have been enlarged, let’s rebuild their clustered indexes and set the fill factor to 100. This will ensure that fragmentation is minimized so we know we’re comparing apples to apples when looking at sizes later.

ALTER INDEX [PK_SalesOrderDetailEnlarged_SalesOrderID_SalesOrderDetailID]
ON [Sales].[SalesOrderDetailEnlarged]
REBUILD WITH (SORT_IN_TEMPDB=ON,FILLFACTOR=100);

ALTER INDEX [PK_SalesOrderHeaderEnlarged_SalesOrderID]
ON [Sales].[SalesOrderHeaderEnlarged]
REBUILD WITH (SORT_IN_TEMPDB=ON,FILLFACTOR=100);

We’ll also want to shrink the transaction log. Enlarging tables and rebuilding indexes grew the log, and the restore portion of this demo will give a more accurate picture, especially in terms of time, if the log is as small as possible.

DBCC SHRINKFILE('AdventureWorks2012_Log',1);

Now for the data collection part. We’ll take a total of 4 backups of the AdventureWorks2012 database. For each backup we want to monitor:

– the size of the backup file
– the time the backup took
– average CPU utilization while the backup was running

The size of the backup file is easy – we can just look at the file’s properties in Windows after-the-fact to find it.

The elapsed time of the backup is also fairly easy. When you run a backup in SQL Server Management Studio, the “Messages” window will return some information about the backup, including the time it took to run.
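
If you’d rather not copy numbers out of the Messages window by hand, msdb keeps a history of every backup. A query along these lines (just a sketch – trim the columns and filter to taste) returns both the size and the elapsed time:

-- recent backups of AdventureWorks2012, with size and elapsed time
SELECT TOP (10)
	bs.database_name,
	bmf.physical_device_name,
	DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS elapsed_seconds,
	bs.backup_size / 1048576.0 AS backup_size_mb,
	bs.compressed_backup_size / 1048576.0 AS compressed_size_mb
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf ON bs.media_set_id = bmf.media_set_id
WHERE bs.database_name = N'AdventureWorks2012'
ORDER BY bs.backup_start_date DESC;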

Average CPU utilization is a little more work because SSMS doesn’t tell you about it. I use Perfmon since it’s built into Windows. When first opening Perfmon, the counter it shows by default is “% Processor Time”. This may seem like the counter you’ll want to use, but it isn’t. % Processor Time includes all operations on the processor, many of which are kernel processes we can’t control. There’s another counter called “% User Time” that will only show user processes. This still isn’t perfect; there will undoubtedly be other user processes running, but with all applications closed this counter is usually pretty close to zero.

The “Average” box will show the average CPU utilization since the counter was last cleared. For the most accurate value you will want to clear the counter immediately before starting a backup by right-clicking inside the chart and selecting “clear”.

Now to take some backups and monitor the results using the above methods. The first backup will be the default with backup compression disabled; we’ll enable it for the second.

BACKUP DATABASE [AdventureWorks2012]
TO DISK = '_path_\AW_NoDataCompression_NoBackupCompression.bak';

BACKUP DATABASE [AdventureWorks2012]
TO DISK = '_path_\AW_NoDataCompression_YesBackupCompression.bak'
WITH COMPRESSION;

Now let’s add data compression to the mix by enabling it on the clustered indexes of the 2 enlarged tables. We could enable it on all indexes, but these 2 represent the vast majority of the database’s size. I used page compression to get the maximum possible effect, and once again set the fill factor to 100.

ALTER INDEX PK_SalesOrderDetailEnlarged_SalesOrderID_SalesOrderDetailID
ON Sales.SalesOrderDetailEnlarged
REBUILD WITH (DATA_COMPRESSION=PAGE,SORT_IN_TEMPDB=ON,FILLFACTOR=100);

ALTER INDEX PK_SalesOrderHeaderEnlarged_SalesOrderID
ON Sales.SalesOrderHeaderEnlarged
REBUILD WITH (DATA_COMPRESSION=PAGE,SORT_IN_TEMPDB=ON,FILLFACTOR=100);

The rebuild operations necessary for compression will again grow the log, so we will shrink it again for consistency, and then run 2 more backups – with and without backup compression.

DBCC SHRINKFILE('AdventureWorks2012_Log',1);

BACKUP DATABASE [AdventureWorks2012]
TO DISK = '_path_\AW_YesDataCompression_NoBackupCompression.bak';

BACKUP DATABASE [AdventureWorks2012]
TO DISK = '_path_\AW_YesDataCompression_YesBackupCompression.bak'
WITH COMPRESSION;

At this point we have 4 backup files:

– No data or backup compression
– Backup compression but not data compression
– Data compression but not backup compression
– Both data and backup compression

I also restored each of these backups and captured the restore time and average CPU for each. I won’t cover that here, but feel free to try restoring on your own.
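
If you do want to try it, restoring to a separate copy keeps the original database intact. Something along these lines should work – the logical file names come from the scripts above, and the paths are placeholders you’ll need to adjust for your environment:

-- restore one of the backups as a new database so the original stays untouched
RESTORE DATABASE [AdventureWorks2012_RestoreTest]
FROM DISK = '_path_\AW_YesDataCompression_YesBackupCompression.bak'
WITH
	MOVE 'AdventureWorks2012_Data' TO '_path_\AW2012_RestoreTest.mdf',
	MOVE 'AdventureWorks2012_Log' TO '_path_\AW2012_RestoreTest.ldf',
	STATS = 10;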

Now that we’ve completed our trials, let’s look at what we have. Since we’re talking about compression, the size of the backup files is what we’re most interested in. Here are the results I observed:

In my tests, the backup with no compression at all was the largest. Data compression yields a smaller backup file, backup compression gives us an even smaller file, and combining both data and backup compression is slightly smaller yet. This is great news, as it shows that backup and data compression can work together to produce a result that’s smaller than either can achieve on its own.

As for backup and restore times, I would expect the results to vary based on the hardware being used. I wrote my backups out to an external USB hard disk and restored from there as well.

The backup and restore times correlate pretty closely with the backup sizes above. This is no surprise – the vast majority of time for backup and restore operations is spent waiting on I/O either reading the database from disk or writing the backup to disk. The less data SQL Server has to read or write due to compression, the faster the operations will complete.

Here’s the average CPU usage for the duration of the backup and restore operations. This is a bit more interesting, but still makes sense. Backups and restores that don’t involve backup compression use very little CPU; these operations are mostly I/O. Processor usage shoots up when backup compression is enabled because there’s a lot more work involved with compressing and decompressing the backup. Based on the above results, compressing a backup stream is more CPU-intensive than decompressing it for a restore. It also looks like slightly less CPU is required for backup compression of databases utilizing data compression as opposed to those that don’t. I imagine this is because the page-level data compression we used makes a big effort to reduce duplicated values, so there’s slightly less work to be done when it comes time to compress the backup. Backup compression will cause elevated CPU utilization only when backups or restores are running, while data compression uses extra CPU power throughout the course of normal DML operations, but not at time of backup.

So what’s the catch?

Everything we just did shows us that there’s an advantage to using data compression and backup compression together. The differences weren’t very significant in this case, only about 100MB for overall backup size, but I tried this with many datasets and the advantage only became more apparent as the sizes increased. It sounds like a total win-win, but the saying goes that there’s no free lunch in computing, and that holds true for compression as well.

The first catch is cost. Data compression is only available in SQL Server Enterprise Edition, and it’s not cheap.

After that, the biggest thing to look out for is CPU utilization. Both data and backup compression involve extra work for your CPU, and if your CPU utilization is already high, you may find that data and/or backup compression perform poorly.

Final Thoughts

Both backup and data compression can provide tremendous advantages in terms of performance and reduced storage space, but these advantages are highly dependent on the data they are manipulating. Be sure to take some time and test whether your environment comes out ahead by using them either individually or together. Based on what I’ve seen, it probably will, but it’s always better to make sure beforehand rather than turning something on and seeing no benefit or, even worse, a performance hit.

Jul 02 2013
 

I was recently extremely honored to find out that I earned the Microsoft Certified Master (MCM) certification for SQL Server 2008. For me, this is a major milestone in a long, difficult, and costly journey. I have received several questions about it already, and while I can’t answer everything, I am happy to share as much of my experience as possible.

Why do I want to be a Microsoft Certified Master?

If you’re considering pursuing the MCM, you really need to sit down and ask yourself why you want to do it beforehand. If you believe that becoming an MCM will lead to being showered with job offers or a massive pay increase at work, I would urge you to reconsider. I’m not saying these things can’t or won’t happen; after all, the MCM is a valuable certification that very few people hold. My advice is merely not to expect it to happen. I’ve always been one to set my goals high and my expectations low, as I would much rather be pleasantly surprised if good things happen instead of being crushed if they don’t.

Earning the MCM certification was a personal goal of mine. For a long time I’ve wanted to become the best DBA I can possibly be, and in my mind becoming an MCM is an excellent way to prove to myself that I’m on the right path.

How do I become an MCM?

The first step is to earn both of the prerequisite certifications: MCITP: Database Administrator 2008 and MCITP: Database Developer 2008. For the administrator certification you must pass 70-432 and 70-450. Passing 70-433 and 70-451 will earn you the developer cert. These are just the certification prerequisites – you should also have the experience prerequisites which you can see on the Microsoft Learning website.

Once all of the prerequisites have been completed, you’ll be eligible to sit for the MCM Exams. Yes, there are two of them. They are the MCM Knowledge Exam (88-970) and MCM Lab Exam (88-971). You need to pass both to become a Microsoft Certified Master of SQL Server, and you’re not eligible to try your hand at the lab until you’ve passed the knowledge exam.

How should I prepare for the MCM Knowledge Exam?

Everyone has their own combination of study methods that work best for them, but here’s what I did. In terms of certification exams, I’ve always tried to study as little as possible. Anyone can cram for a test, remember things long enough to pass, and then forget it all shortly thereafter. I’ve said for a while now that if you really want to see how much you know, don’t study at all and go take the test. Or if you feel compelled to study, only review specific topics you feel you need to brush up on. I recommend these methods for the MCTS/MCITP (SQL Server 2008) and MCSA/MCSE (SQL Server 2012) without hesitation.

All that being said, the MCM is a different beast entirely. Not only is it much more difficult, but more expensive as well. Most of the prerequisite exams are in the $125-$150 range, while the MCM Knowledge Exam is $500 and the lab exam is $2000.

For the MCM Knowledge Exam I think you should definitely do some reviewing, but once you feel like you’re ready, you should try taking it. Sure it’s difficult, but not impossible. I’ll be the first to admit that it’s not cheap, but there’s no better way to find out what’s on the test than to take it. If you don’t pass, you’ll have a much better idea of what to prepare for the second time around.

What is the MCM Knowledge Exam like?

I’d love to tell you about the excellent questions you’ll find on this test, but I’m not about to violate the integrity of the exam or the NDA I agreed to. What I can do though is tell you about my experiences taking the test.

I took the knowledge exam at the Prometric Testing Center in downtown Chicago back in December of 2011, which I believe is (or was at the time) the only location in Illinois offering the test. Administratively, probably the only difference between the knowledge exam and any of the other exams I’ve taken there was the security: there was more of it. Before starting the test I had to prove my pockets were empty by pulling them inside out – this seems to be the standard for all tests I’ve taken. In addition to that, this time I was also wanded with a metal detector and my fingerprints were captured via computer.

The test itself was multiple-choice like all the prerequisite exams, but it was longer and the questions were more difficult. It did an excellent job of making me think, and was very much in line with what I was expecting.

How should I prepare for the MCM Lab Exam?

Again, different methods work for different people, but in this case I believe that regardless of your skill level, you should be preparing heavily for the lab exam.

The first thing I did to prepare started long before the MCM was in my sights, and that was reading blogs and whitepapers. SQLskills.com has an excellent list of recommended readings, and I was pleasantly surprised to find that I had already read a number of them in the process of satisfying my curiosity. (I re-read them all in preparation for the lab.)

After that, I watched all of the MCM Readiness Videos. They’re a tremendous resource, covering a lot of topics and demonstrating many of their basic facets. That being said, watching the videos alone will not prepare you for the exam. They even warn you of this at the beginning of each video.

Then came practice, practice, practice. The lab exam isn’t just knowing about features, it’s about being able to implement them. I made sure I was familiar with the T-SQL syntax and (when applicable) the graphical methods for implementing all of the features covered in the videos and reading. I went through this several times to make sure I was good and comfortable.

But that’s still not enough, because practicing features in isolation pretty much guarantees an ideal environment for doing so. Is your production environment ideal? I know none that I’ve worked in have been. In fact, I’ve found plenty of surprises that have popped up – perhaps things done by my predecessor that have long been forgotten. The MCM lab models situations you’re likely to encounter in the real world.

The MCM lab is open book. You are free to use the copy of Books Online (BOL) provided in the testing environment to answer any questions you might have. This can be a huge help when used properly, but it’s important to realize that the more time you spend looking through BOL, the less time you have to solve problems. Getting familiar with the installed version of BOL ahead of time may prove valuable when taking the test.

SQL Server has so many features and so much material to cover that there’s no way to ever know it all. My advice is to do all of the above, and when you feel ready, jump in and try the lab. If you pass, awesome! If not, just like the knowledge exam, you’ll have a much better idea of the type of questions it contains and how you can better prepare for another attempt.

If at first you don’t succeed…

Speaking of “another attempt”, heh, I’ll come right out and say it: I didn’t pass the lab exam until my third try. I’m not ashamed either – it’s hard! To protect the integrity of the exam, the results you get back offer very little information. There’s an overall percentage of the passing score, and then it’s broken down across a few categories from there.

I don’t know what the score necessary to pass is – they don’t tell us, but let’s just say that it’s 70% for the sake of argument. On my first attempt I had 72% of a passing score, so that would equate to 50.4% overall. But again, I really don’t know what a passing score is; I’m just pretending it’s 70%. All I knew was that I was 72% of the way to passing and didn’t have much more studying to go. I knew I did much better on my second try, but still came up short with 91% of a passing score, or 63.7% overall based on my assumption. The third time was the charm for me.

Microsoft gives out no information on this that I know of, but I would have to imagine that a majority of people don’t pass on their first attempt. But I don’t think that’s a bad thing. The MCM has much more in common with a marathon than a sprint.

What is the MCM Lab Exam like?

I’ll describe the lab exam using 3 words: great, fair, and hard. As a fellow MCM told me, “that test is no joke” and I couldn’t agree more.

It’s great because the questions are very well thought out and do an excellent job of testing your skills. Not just skills for accomplishing objectives, but also things like troubleshooting and finding creative solutions that might not follow established best practices.

It’s fair because while it asks great questions, I didn’t feel any of them were too outrageous. It was clear to me that I did a good job of preparing when I was not totally clueless about any question it asked. The scenarios are based on things you can reasonably expect to encounter throughout several years of real-world experience. I never got the impression that any of the questions were overtly tricky or written with sinister intent.

It’s hard not just because the questions are great, but because it comes with a time limit. If I had a week to do all the tasks in the exam, I would have aced it no problem. But you don’t get a week. In fact, you don’t even get a standard 8-hour workday. Your time allotment also includes any breaks you might need. Need to use the restroom? That’s fine, go as much as you want, however the clock will not stop. After skill, time management is by far the most important aspect of taking the test.

Testing Environment

My third attempt at the lab exam was done remotely from my house. This is an excellent option, because you get to take the test in the comfort of your own home. You can also have a drink or a snack while working, and should you need to take a bio break, you won’t incur the extra delay of going through the testing center’s security checks before resuming.

My first two attempts taking the lab exam were done at the testing center, but you won’t have that option anymore. As of January 1, 2013, the MCM Lab Exam is only offered over remote connections. I think this is a great change that will make life easier for those attempting the test.

It’s important to remember that testing centers cater to LOTS of different types of tests, and many are not IT-related. I’ve seen people taking exams for accounting as well as medical certifications. My wife has also taken tests there for certs in her field. From my experience, testing centers are usually set up for exams that involve multiple-choice questions (lots of mouse clicking) or essays (lots of typing). These tasks can be accomplished reasonably well on the provided 17″ monitor. The MCM Lab doesn’t really fit either of those scenarios though, as you are actually carrying out DBA tasks in SQL Server Management Studio. While it’s totally possible to get the job done on a smaller monitor, it’s no secret that things get easier with a little more screen real estate. At home, you’re already familiar with the dimensions of your environment, so there’s one less thing to worry about.

Do You Have Any Tips for Taking the Lab Exam?

As I said above, time management is very important. As soon as the test starts, do some quick math and divide the number of minutes you have by the number of questions there are. This will give you a good idea of how many minutes on average you can dedicate to each question. Try to stick to this, as it’s very easy to get sucked into spending too much time on individual questions.

Read through everything! At the Boy Scout camp I went to, individual patrols were responsible for cooking all their own meals. Recipe sheets we would get from the commissary always said something like “read through all of these instructions twice before you start cooking” as the first step. Just like cooking at scout camp, read through the entire test before you start working on problems (but only do it once – the clock is ticking!) Make notes about what each question is asking (scratch paper is allowed). Start working on the questions you think you’ll be able to correctly solve the fastest. After completing everything you’re comfortable with you will have the rest of the time to focus on questions requiring more work or that you aren’t so sure about.

How Long Does It Take To Get Results?

Waiting is not fun, especially when you’re waiting on the results of tests that you’ve spent so long preparing for. After completing the lab exam, there used to be a 30 day delay before receiving your results. This upset many people and didn’t seem to serve much purpose. Fortunately the MCM Team listened and changed this policy back in March. You can now expect to receive lab exam results much more quickly.

As the blog post says, this does not apply to the knowledge exam, which is scored by Prometric apparently around the 15th of the month. It can take up to 30 days to receive scores for the knowledge exam, but it may be quicker depending on when you take it.

Good Luck!

To anyone out there who is working on the MCM, I hope you found this helpful, and best of luck to you!