Recently I attended Redgate’s SQL in the City event in Seattle, WA. I was in Seattle for the annual PASS Summit; you can read about my Summit adventures here. While at Redgate’s event, I attended a session aimed at SQL Server Data Tools (SSDT) users. Redgate wanted to get a better handle on the pain points we had in SSDT with respect to source control. I use Team Foundation Server (TFS) as my source control product, and it ties in very nicely with SSDT.
After this discussion, Bill Fellows (B | T), asked if I would be willing to blog or speak about my own build and deploy process for databases. Well, given that I am so NOT a speaker type, the seed for this blog post was planted.
I will not be diving into technical detail on how to do all these things (that would be a very large book), but rather giving an overview of the features of TFS, why I think they are important, and how I use them. Think of this as the 50,000-foot overview of my build and deploy process.
Integration with SSDT (Visual Studio)
Since I don’t want to use umpteen gazillion tools for development, I want my source control to integrate seamlessly with Visual Studio. TFS does this better than any other product I’ve used, probably because they are both Microsoft products. This way I don’t have to use multiple IDEs to work on SSIS, SSRS, SSAS and database development projects. I have one tool with the same source control experience for all.
Design your Branches Accordingly
TFS uses the branch paradigm to split code out for different development efforts. I like this paradigm: it’s easy to visualize and it makes sense to me. Designing your branches is probably the most important part of the source control process. Think of this as your data model; if you get it wrong, you will pay dearly for it in the end. Think about how your organization is structured and how your code moves through the development process. What environments do you have: Development, QA, Staging, Hotfix, etc.? How does your code move through those environments? Is it strictly one-way, or can your code move in more than one direction?
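As a concrete sketch (the project name and branch layout here are hypothetical, not a prescription), a simple TFVC branch hierarchy for the environments above might look like this, with day-to-day work happening in Dev, stabilization in Main, and emergency fixes in Hotfix flowing back to both:

```
$/MyDatabaseProject
  Main              (stable; source for QA/Staging builds)
    Dev             (branched from Main; daily development, merged up when stable)
    Release         (branched from Main; mirrors what is in production)
      Hotfix        (branched from Release; emergency fixes, merged back to Release and Main)
```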
Use Gated Check-ins
No matter how many times you tell your developers to do a build locally before checking in their changes, someone will inevitably forget. The last thing you want is bad code getting into your code base. Then you’re left with all your developers sitting around while changes are backed out or corrected, and we all know what happens when developers sit around with idle hands. It gives me nightmares just thinking about it.
Schedule Automated Builds
This is so important. You most likely have more than one developer working on the code, and you want to make sure all the changes they are making are not stomping all over each other and breaking things. Just because developers can get their code past the gated check-in doesn’t mean it won’t break something else. You should actually be doing this for all your environments, not just development. In a large shop I recently worked in, we scheduled our automated builds twice per day. The first ran at 3 a.m., which allowed enough time for correction before staff came in if a build failed. The second ran at lunch time, which gave us a “sneak peek” at the big picture before the nightly processes kicked off. While TFS does provide some default build templates, many of us have such custom applications and database projects that you may have to learn how to write XAML. I did.
Set Up Notifications
This is one of my favorite “tattletale” features of TFS. You can set up notifications to find out when things are checked in successfully, when check-ins fail, when builds fail, and all kinds of other things. Use this feature. I can’t stress this enough: USE THIS FEATURE!
Install the TFS Power Tools
While TFS has some great features, some of them are a bit hard to navigate or use. This is where the Power Tools come in. They are freely available for download from MSDN and put some great features just a click away, instead of making you write custom code to get what you want: who has what checked out in a branch, wildcard searching, copying a query, cloning builds, and so on. The list is quite extensive.
Change the Default Source Control Settings
None of this does a lot of good unless you change the default settings for source control in SSDT. One of the biggest bang-for-your-buck settings is to automatically get the latest code when you open a solution. By default this is not enabled (silly, I know, but it’s not). The other setting is to check out objects automatically when they are edited. These two settings will make your source control life much easier.
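For reference, both settings live on the same Visual Studio options page. The exact labels below are from a VS 2012-era install and may differ slightly in other versions:

```
Tools > Options > Source Control > Environment
  Checked-in items
    Saving:  Check out automatically
    Editing: Check out automatically
  [x] Get everything when a solution or project is opened
```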
Wrapping it up
I’m not going to lie: getting this all set up in TFS is no small effort. This is your livelihood; treat it as such. Do your research into how your company’s processes currently work, and then compare them to how you want them to work. Once you have all that, you can come up with a build and deploy process that works for you.
3 thoughts on “My Build and Deploy Process (as Requested by Bill Fellows)”
This is quite a bit of work and thanks for sharing. A couple of items I’m curious about. You don’t have to reply here. A separate post on each would be great.
1. Branching/merging – How do you handle branching in TFS for the databases? I’m guessing that people just load up SSDT with the branched project and make changes. Are merges handled in TFS somehow? Do developers ever create a database for the branch for testing?
2. Testing – What do you use/do for database code testing. Or do you?
3. Builds – How do you “build” the database? Are there any checks made here?
All good questions. I debated on whether or not to include my branching/merging process. I guess I should have, but it’s such a huge topic, I’ll do it as a separate post.
Testing & Builds fall under the same response. I guess I now have 3 more posts to write 😉
Bill, this is all your fault!
This is really good and I hope I don’t sound negative, it’s not my intent. You’ve covered a lot of the “whats” here, but not really the “hows”.
As for TFS and SSDT…those are fine tools and make the process of notification, automation, and “gated check-ins” a lot easier. But a good database build and deploy process should be somewhat agnostic as to what that tooling is, it frankly isn’t part of a database build and deploy system. Extra points if you can swap out components and not break the actual build and deploy system.
The problem I’ve found is that people don’t really understand how a good database build and deploy should work. Most tools treat DDL as if it were compiling C# code. That’s just wrong.
Here is a short list of what a good db build/deploy tool should do.
- Metadata should be preferred over DDL (this means SSDT and database projects fail immediately for me); however, the final DDL should be debuggable and generated repeatably (again, SSDT fails this). It’s difficult to write good DDL when you have to, say, drop all of your FKs on a table and reapply them later.
- We should be able to set up CI loops quickly and have a template to help us do that.
- The delivery of model/system/lookup data should be integrated with the schema changes, and it should be simple and repeatable.
- We should be able to customize the system to meet our needs (e.g., allow some constraints to be untrusted but generate a warning).
- Extra points if you can do all of this with reduced downtime.
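To make the FK point above concrete, here is a hedged T-SQL sketch (the table and constraint names are hypothetical) of the drop-and-reapply pattern a generated deploy script often produces, and how an untrusted constraint can be detected afterwards:

```sql
-- Hypothetical schema: dbo.OrderLine references dbo.[Order] via FK_OrderLine_Order.

-- A generated deploy script often drops the FK so the parent table can be altered:
ALTER TABLE dbo.OrderLine DROP CONSTRAINT FK_OrderLine_Order;

-- ... schema change to dbo.[Order] happens here ...

-- Re-adding it WITH NOCHECK is fast, but it leaves the constraint untrusted,
-- so the optimizer can no longer rely on it:
ALTER TABLE dbo.OrderLine WITH NOCHECK
    ADD CONSTRAINT FK_OrderLine_Order
    FOREIGN KEY (OrderId) REFERENCES dbo.[Order] (OrderId);

-- A deploy tool could flag these and raise the warning described above:
SELECT name FROM sys.foreign_keys WHERE is_not_trusted = 1;

-- Re-validating restores trust, at the cost of scanning the existing data:
ALTER TABLE dbo.OrderLine WITH CHECK CHECK CONSTRAINT FK_OrderLine_Order;
```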
I have developed just such a tool over the years and many of my clients swear by it. http://www.davewentzel.com/md3
Great article and hope to hear more details about your build and deploy system.