Yes, I know they cost extra. And no, I am not blindly playing PASS marketing cheerleader here. There are actually some really solid reasons to purchase the session recordings:
PASS Summit has many wonderful sessions. Just check out the schedule; there's a ton of great stuff on there. You will encounter time slots where there are multiple sessions you want to attend, and you'll have to pick one and miss the other(s). With recordings of every session, this is no longer a problem.
Have you ever had one of those “aha!” moments midway through a session, and now wish you could re-watch it all over again from the beginning? You can do that with a recording.
Fatigue is a very real problem at conferences. After spending multiple hours in sessions, everyone is bound to hit the point where their brain is full. There is zero shame in this – it happens to me too. When you have session recordings, you can watch or re-watch any session at a time when you are fully awake, aware, and caffeinated.
There may be times when you're better off skipping a session or two. Don't forget, PASS Summit is about more than just sessions; it's about networking and making connections with people. That's why everyone gets together in a convention center instead of this just being an online conference. I've blogged about this before, but there are absolutely times where I go network with people and vendors instead of attending sessions. Having recordings to watch later makes the decision to do this very easy.
Perhaps your manager wishes they could send more people to PASS Summit, but there's not enough room in the budget. Explain to them that if they were to purchase a full set of conference recordings for just a little bit extra, the entire team would be able to benefit from your attending. (I once was on a team where we ate lunch together while watching conference videos one day per week. It was great for all of us!)
So while I realize that PASS Summit session recordings cost extra, in my opinion they help bring even more value to the conference. I think they’re worth it.
I’m very happy to be returning to Boston this weekend to speak at SQL Saturday Boston 2018. I last attended a few years ago and have been wanting to get back there ever since, so I’m happy the cards fell into place this time around!
The organizers in Boston have put together an amazing schedule for the day that I’m really proud to be a part of. Check it out – they really have some top talent arranged on a wide variety of topics.
As for myself, I’ll be presenting a classic session that I feel will always be relevant: backup and restore performance tuning. Yes, it really is possible to tune backup and restore jobs just like queries! Even if backup performance isn’t necessarily important to your organization, I guarantee you that restore performance will be during an outage. If this sounds interesting to you, attend my demo-intensive session to see how these techniques can help you to save the day when seconds count!
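To give a small taste of what I mean by tuning, backup throughput can often be improved just by striping across multiple files and adjusting transfer settings. Here's an illustrative sketch; the database name, paths, and parameter values are made up, and the right values depend entirely on your hardware:

```sql
-- Illustrative only: database name and paths are hypothetical.
-- Striping across files/drives and raising buffer and transfer sizes
-- can significantly change backup throughput; test on your own system.
BACKUP DATABASE [SalesDB]
TO  DISK = N'F:\Backups\SalesDB_1.bak',
    DISK = N'G:\Backups\SalesDB_2.bak'   -- stripe across two files
WITH COMPRESSION,
     BUFFERCOUNT = 50,                   -- number of I/O buffers to use
     MAXTRANSFERSIZE = 4194304,          -- 4 MB per transfer unit
     STATS = 10;                         -- report progress every 10%
```

The same knobs apply when restoring, which is exactly when those saved minutes matter most.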
To register for SQL Saturday Boston 2018, click here. I hope to see you there!
It dawned upon me the other day that this will be my seventh year attending PASS Summit, and I’m pretty fortunate to have been able to attend so many times. Over the years, I have also acquired some knowledge about the event and how to prepare for it that may be helpful for others. So this year, I hope to share a few tips which I think may be most helpful.
If you are looking to begin your PASS Summit experience a little early, head a few hours south of Seattle to Portland, Oregon. SQL Saturday Portland is a fantastic event that takes place the weekend before PASS Summit begins.
SQL Saturday Portland, Oregon
In many ways, SQL Saturday Portland is a wonderful “preview” of PASS Summit sessions. You will find many PASS Summit speakers love to attend this event before heading up to Seattle. Some of them even deliver the very same talks they’ll be doing at Summit. Not only can you enjoy PASS Summit-quality talks for free in Portland, but you can also do so without the huge crowds. It’s a relaxed and fun environment, and a very well-run SQL Saturday. I’ve been attending for years and I can’t say enough good things about it! If this sounds awesome and you’d like to attend, you can register here!
“The Reggie Deluxe”
Aside from their SQL Saturday, Portland is an amazing city in general. It has plenty of great dining options, lots of interesting places to explore, and excellent public transit to get you everywhere you want to be. A little internet searching will get you lots of ideas of where to visit, but here’s a few of my favorites that might not appear on those lists.
Portland is very much a breakfast town, and Pine State Biscuits has the best biscuit-based menu I’ve ever seen, or tasted. Their biscuits and gravy is life-altering, as is their fried chicken. Better yet, you can combine the two into a sandwich they call the “Reggie”. This is one place I make sure to visit every time I’m in Portland, period.
You know what goes great after breakfast? Dessert! Once you've worked off that amazing breakfast, head to Salt & Straw for some out-of-this-world ice cream. I'd never been to an ice cream parlor with a line out the door before, but they had one. And once I tried their ice cream, I understood why. Salt & Straw has a rotating menu with some very "interesting" flavors each month, plus a bunch of classic flavors. My personal favorite: Freckled Woodblock Chocolate.
Finally, Portland has no shortage of hotels, but the one I’ve come to love over several years of staying there is the Courtyard Marriott Portland City Center. They have excellent prices, a great location downtown, and Portland’s light rail stops right out front. This hotel has become a favorite among SQL Saturday attendees, so you’ll be sure to run into #sqlfamily members there. It’s also a walkable distance from Portland Union Station, which is the starting point for my next tip…
What's the best way to get from Portland to Seattle? On a train with all your friends, of course! For several years now, SQLTrain has been doing just that. What started as a few people taking the scenic four-hour Amtrak ride from Portland to PASS Summit has turned into a literal party on a train, with an entire car reserved. Enjoy great company and conversation with #sqlfamily, see old friends, make new ones, oh and did I mention there's doughnuts? SQLTrain prides itself on offering a wide variety of doughnuts from around the Portland area, which is the marketing department-approved way of saying WEBRINGTONSOFDOUGHNUTS!!!
In all seriousness though, SQLTrain is a wonderful time and far cheaper (and more fun!) than renting a car or flying to Seattle. SQLTrain is open to anyone looking to get to PASS Summit from Portland; there is no requirement to attend SQL Saturday Portland to ride the train with us.
Oh, and one more thing: SQLTrain is even bigger this year, as it has expanded to not one but TWO private train cars. If you would like to ride the SQLTrain, some spots are still available. Register here before they're all gone!
I hope to see you at SQL Saturday Portland, or on the SQLTrain, or at PASS Summit, and I'll have more tips in the coming weeks!
…Except I won’t exactly be talking about SQL Server. Instead I will be speaking about Azure Cosmos DB, a relatively new cloud offering that, while different from SQL Server, is quickly proving to be extremely important in the field of data management. So what exactly is Azure Cosmos DB, and why does it matter? This session, “Select Stars: A SQL DBA’s Introduction to Azure Cosmos DB” will cover the basics of the product, how it works, and what it can do for your organization. You will learn how it differs from SQL Server and Azure SQL Database, what its strengths are, and how to leverage them.
We will also discuss Azure Cosmos DB’s partitioning, distribution, and consistency methods to gain an understanding of how they contribute to its unprecedented scalability. Finally we will demonstrate how to provision, connect to, and query Azure Cosmos DB. If you’re wondering what Azure Cosmos DB is and why you should care, attend this session and learn why Azure Cosmos DB is an out-of-this-world tool you’ll want in your data toolbox!
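For a quick flavor of what querying Azure Cosmos DB looks like, its SQL API accepts familiar-looking SELECT statements against JSON documents. The container alias "c" is conventional, and the property names below are purely hypothetical:

```sql
-- Azure Cosmos DB SQL API example; the properties shown are made up
-- for illustration. Queries run against JSON documents in a container.
SELECT c.id, c.customerName, c.orderTotal
FROM c
WHERE c.orderTotal > 100
ORDER BY c.orderTotal DESC
```

It looks a lot like T-SQL on the surface, which is part of why it's such an approachable product for SQL Server folks.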
I am not a fan of triggers. While they represent a very simplistic construct of “if something happens, do something else”, I’ve yet to encounter a situation where they weren’t more trouble than they were worth. In my opinion, their biggest flaw is that they’re very easy to forget about, and then tend to pop up and create problems when least expected. I avoid them like the plague, and happily tell my clients to do the same. Unfortunately I still have to deal with them frequently because they are commonly used in vendor applications. This scenario is exactly what brought about this story.
I was helping an organization add high availability to their environment by deploying Availability Groups. This particular server hosted several different third-party applications, and the decision was made to give each application its own AG. The AG deployment was successful and everything seemed fine until we started testing failovers. At this point we noticed that one of the applications was using a server-level DDL trigger, which was written roughly like this:
Bad DDL Trigger
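The trigger was originally shown as a screenshot, so here is a rough reconstruction based on the details described below. The names AppDB and dbo.AddEvents come from the actual application; the trigger name and parameter are placeholders of my own:

```sql
-- Reconstruction for illustration. Only AppDB, dbo.AddEvents, EVENTDATA(),
-- ON ALL SERVER, and DDL_EVENTS are taken from the real trigger; the
-- trigger name and variable are hypothetical.
CREATE TRIGGER [CaptureDDL]          -- placeholder name
ON ALL SERVER                        -- fires for the ENTIRE server
AFTER DDL_EVENTS                     -- top of the DDL event hierarchy
AS
BEGIN
    BEGIN TRY
        DECLARE @EventData XML = EVENTDATA();
        EXEC AppDB.dbo.AddEvents @EventData;   -- write to the app's database
    END TRY
    BEGIN CATCH
        ROLLBACK;                    -- also undoes the original DDL statement
    END CATCH
END;
```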
Very simply, this trigger collects information about DDL events via the EVENTDATA() function and saves them to the application's database, AppDB, using a stored procedure called dbo.AddEvents. This might not seem too bad at first, but there are a few problems with it which can cause major issues.
First, the scope. While the application that deployed this trigger has its own database, AppDB, this trigger is firing for events on the entire server, which is what the ON ALL SERVER clause means. Any qualifying event on this server, even if it pertains to another application with a separate database, will be written into this database. And what is a "qualifying event"? Literally any DDL statement. The AFTER DDL_EVENTS line specifies the very top of the event hierarchy used by DDL triggers.
So to recap on scope: this application is capturing all DDL statements on the entire server and saving a copy for itself. This application is seeing (and recording) plenty of events that it has no need to see. If this were a healthcare application or a system that dealt with PII, it would be a legal nightmare; fortunately it isn't.
However, scope isn’t the only issue.
Let's look at error handling. This trigger is using TRY/CATCH syntax for error handling. If an error occurs and the CATCH block is entered, it rolls back. This may be a good practice, depending on the intent. Don't forget, triggers execute in the context of the transaction that triggered them. If this trigger is fired by an event, such as a table being created, and the trigger is unable to capture the event, it will issue a ROLLBACK, undoing the table creation. If the goal is to make sure that every operation is captured with nothing at all sneaking by, this is a reasonable choice. But what happens if the rollback is unintended?
Remember, the original intent here was to implement Availability Groups, and there were multiple AGs, one for each application. Consider a scenario where two nodes exist, which we will call Node 1 and Node 2.
Assume all Availability Groups are running with Node 1 as primary. Then the AG containing this application’s database “AppDB” fails over to Node 2, but all other AGs stay behind on Node 1. That DDL trigger still exists on Node 1, but the stored procedure in AppDB cannot execute there since writes can only occur on the primary node for AppDB, which is Node 2. At this point, the trigger causes an error, and rolls back.
Any DDL operation taking place on Node 1, for any database, will fail in this case. And that's just not acceptable. This application isn't merely failing to play nicely in the sandbox; it's taking the entire sandbox with it to another server, and nobody else gets to play at all.
There are several possible fixes for this situation:
Change the scope of the DDL trigger to only affect the application’s database and not the entire server.
Remove the DDL trigger entirely and replace it with a different technology, such as SQL Server Audit.
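As a sketch of the first option, a database-scoped version of the trigger would fire only for DDL inside AppDB and leave every other database on the server alone. The names here are placeholders again:

```sql
-- Hypothetical database-scoped rewrite of the trigger (fix #1);
-- trigger name and variable are placeholders. Scoped to AppDB only,
-- so DDL in other databases never touches it.
USE AppDB;
GO
CREATE TRIGGER [CaptureDDL_DB]
ON DATABASE                              -- fires only within this database
AFTER DDL_DATABASE_LEVEL_EVENTS          -- top of the database-level hierarchy
AS
BEGIN
    DECLARE @EventData XML = EVENTDATA();
    EXEC dbo.AddEvents @EventData;
END;
```

With this scope, a failover of some other application's AG can no longer break DDL for the whole server.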
In the end, I contacted the vendor and explained the issue. They stated that their product does support Availability Groups, and agreed that this trigger was inherently incompatible with them. Their solution was “Go ahead and remove that trigger. We really only use it for internal testing anyway, it’s not necessary in customer environments.”
And with that, the problem went away. If only all my issues could be solved that easily!
A big thank you to Steve Jones for hosting this month’s T-SQL Tuesday! If you’d like to help host a T-SQL Tuesday, contact Steve.