
Where to Store Index DDL

Recently I was asked my opinion, via Twitter, on where to store the index DDL for a single database that had multiple clients with varied usage patterns.  You can see the whole thread here.

It’s a great question and there were some interesting suggestions made.  My approach to this scenario is kind of a hybrid of all the suggestions and comments.

I’ve had to deal with this kind of thing in the past, and what I found worked best was to create a single file for each client that contains the client-specific DDL.  I wish I could take credit for this idea, but it wasn’t mine, it belonged to a co-worker.  At first I resisted and thought it was a bad idea.  I mean really, mixing DDL for more than one object in a single script just seemed wrong and went against every fiber of my OCD organizational self.  But in the end, this is what worked best in our environment.

Our initial thought was to include our index DDL with the table, but use variables to name the index objects that were specific to the client.  This way the index names would never collide, but then that kind of defeated the whole purpose of different indexing strategies for different clients.  Thankfully we scrapped that idea before we implemented it.

We tried creating a separate file for each table, with that table’s client-specific DDL in each file.  That became a nightmare to deploy and maintain.  We had to build logic into our pre- and post-deployment scripts to handle it.

Then we tried separating the index DDL files out by client, so we ended up with a bazillion index DDL files for each table.  Okay, maybe not a bazillion, but it was a lot and it was even more of a nightmare to maintain.

We settled on the approach I mentioned earlier: one DDL file per client that held all of that client’s DDL, regardless of object.  We found it was much easier to maintain and deploy.  We defaulted each of our client-specific DDL files to be NOT included in the build.  When it came time to do a build/deploy for a specific client, we would set the option to include that client’s file in the build.  We were not using continuous integration, so this may not work if that is what your shop is doing.  Or it may work with just a few tweaks to your process.  It did work for our situation and it worked well.
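
If you’re curious what that looks like under the hood, here’s a rough sketch of the relevant piece of an SSDT database project file (the file and folder names are made up for illustration).  Scripts with a Build Action of None are left out of the build; flipping a client’s file from None to Build (via the file’s properties in Visual Studio, or by editing the .sqlproj directly) pulls that client’s DDL into that build.

    <!-- Hypothetical .sqlproj fragment: shared objects always build,
         client-specific DDL files default to Build Action = None -->
    <ItemGroup>
      <Build Include="dbo\Tables\Orders.sql" />
      <None Include="ClientDDL\ClientA.sql" />
      <None Include="ClientDDL\ClientB.sql" />
    </ItemGroup>
    <!-- For a ClientA build/deploy, change that one entry to:
         <Build Include="ClientDDL\ClientA.sql" /> -->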

I don’t think there is a single correct answer to this question.  Like almost everything else in our tech world, the answer is going to be, “it depends”.  Figure out what works in your environment and then go with it.  It might take a few trial and error runs to get it right, but you’ll figure out what works best over time with a little persistence.

I’d love to hear your thoughts on this.

Using the Analysis Services Execute DDL Task

My grandfather used to say, “There’s more than one way to skin a cat.”  I always thought it was a disgusting saying, I mean who would ever want to “skin a cat”?!  Gross!  But the underlying meaning was clear, there’s always more than one way to do something.  And no, my grandfather never actually skinned any cats; he was just a country boy from a small town in Iowa, where you quite often had to come up with creative ways to get things done.

There are often times when we get so focused on what we are doing, we forget to look around.  This happened to me a few weeks ago.  We were in the middle of a big upgrade at work; not only were we upgrading our ERP system, we were changing platforms as well (read this as:  no one got more than 3-4 hours of sleep per night for about two weeks, including myself).  We also have a BI solution that uses SQL Server Integration Services (SSIS) to pull data from our ERP system and store it in a data warehouse on SQL Server, with a multidimensional SQL Server Analysis Services (SSAS) cube that sits on top of the warehouse.  Pretty common scenario.

The last part of our ETL process is to process the dimensions and facts for the newly loaded data.  I have always used the Analysis Services Process Task in SSIS to accomplish this.  It works great; it’s stable and easy to use.  Just click a few items and you are off and running.

My ETL code had been stable for so long that I hadn’t needed to make any changes since I upgraded from Visual Studio 2015 to Visual Studio 2017.  Little did I know that there was a bug in Visual Studio 2017/SSDT 15.5.2 that will not allow you to use the Analysis Services Process Task if your target version of SSIS is 2016.  You will get a couple of different errors depending on what you are doing.  If you try to modify an existing package that uses an Analysis Services Process Task and your target version of SSIS is 2016, you will get the following error:

Post71-EditExistingError

If you try to create a new package using an Analysis Services Process Task and your target version of SSIS is 2016, you will get the following error after it lets you configure your processing tasks and you click OK:

Post71-NewPackageError

In both of these scenarios you will not be able to save the package.  So what the heck are you supposed to do?!  Here’s where my tunnel vision (and panic) set in.  How was I supposed to get my SSAS objects processed?

I could always script out my processing tasks using SSMS and drop them in a SQL Agent job step.  But I have multiple environments and multiple cubes, so each one would have to be hard-coded.  Not a great idea, so scratch that.

Post71-ScriptProcesinSSMS

I could create a new SSIS project, targeting SSIS 2014, just for the processing tasks.  (Thanks to Teo Lachev for the idea.  And yes, you can deploy SSIS 2014-targeted packages/projects to SSIS 2016.)  It’s better than my first idea.  I could parameterize all the connection managers so nothing would be hard-coded, but then my code would live in two different projects.  Not ideal, but it would definitely be better.

Then Gerhard Brueckl pointed me to the Analysis Services Execute DDL Task.

Post71-ASDDLTask

I did some quick research on it here and it turns out this was exactly what I needed to “skin this cat”.  It would allow me to keep my SSAS processing packages in the same project as the rest of the ETL, and I could parameterize it to my heart’s content (cue beams of light from above and singing angels).  Why hadn’t I thought of this before?!  To be honest, I had never paid attention to this task.  It had the term DDL in it, and I wasn’t writing DDL, so why would I ever need it, especially since I was dealing with SSAS?  Yes, I know, need I remind you about my lack of sleep and my tunnel vision?

Post71-ASDDLTaskOptions

The cool thing about the Analysis Services Execute DDL Task is that you have a choice when it comes to your source for the DDL.  I opted for the variable because I needed to parameterize it with the SSAS database name.

Post71-SourceTypes

I was able to script my processing tasks and dump that code into a variable that I built with expressions so that it could be parameterized.  Now, one thing to keep in mind is that the XMLA produced for SSAS processing tasks has double quotes in it.  You will have to either a) escape the double quotes or b) create a variable to hold the double quote and use it in the expression that builds your XMLA variable.  I opted for b, creating a variable to hold the double quote; it seemed cleaner to me.  Here’s a good post on Stack Overflow on how to do both.
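
To give you an idea of what option b looks like (the parameter, variable, dimension, and database names here are all made up, so adjust for your own solution), the expression behind the XMLA variable ends up being one long string concatenation, with a Quote variable holding a single double quote character and the SSAS database name supplied by a parameter:

    "<Batch xmlns=" + @[User::Quote] + "http://schemas.microsoft.com/analysisservices/2003/engine" + @[User::Quote] + ">" +
    "<Parallel><Process><Object>" +
    "<DatabaseID>" + @[$Project::SSASDatabaseName] + "</DatabaseID>" +
    "<DimensionID>Dim Date</DimensionID>" +
    "</Object><Type>ProcessUpdate</Type></Process></Parallel></Batch>"

Set the task’s SourceType to Variable, point it at that variable, and the finished XMLA gets sent to the SSAS server at run time.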

As you can see below, it looks very intimidating with all the XMLA & expressions, but it creates a very pretty XMLA processing task in the end.

Post71-VariableExpression

I created one task for processing dimensions and one task for processing facts, mostly because I want to be able to track the performance of each separately, but you could create one task to do it all.
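
For what it’s worth, the fact side follows the same pattern, just aimed at measure groups instead of dimensions.  Here’s a rough sketch of the XMLA the facts variable ends up producing (again, the IDs are placeholders):

    <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <Parallel>
        <Process>
          <Object>
            <DatabaseID>MySSASDatabase</DatabaseID>
            <CubeID>Sales</CubeID>
            <MeasureGroupID>Fact Internet Sales</MeasureGroupID>
          </Object>
          <Type>ProcessFull</Type>
        </Process>
      </Parallel>
    </Batch>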

I was able to add these new packages to my SSIS project and deploy it without issue.  They’ve been running ever since with no problems.

I hope this post helps someone else break out of their tunnel vision.

PSA:  No cats were harmed in the writing of this blog post.

Visual Studio 2017 Irritants From the SSIS Developer View

I was among the many Integration Services (SSIS)/Database (DB) developers out there that cheered when Microsoft finally announced support for SSIS in Visual Studio 2017.  I mean really, VS2017 was released back on March 7, 2017, but if you wanted to develop any SSIS packages in VS2017, you had to wait for the release of SSDT on August 23, 2017.  That’s more than 5 months.  All the other kids got to play with the shiny new toy right away, but we had to wait.  I know there are a ton of reasons for this, but it still sucks.

That being said, I was very happy.  I could finally do ALL my development in the same tool: DB, SSAS, SSRS & SSIS.  But after only 30 days of using it, I have discovered a few things that just irritate me.  I always thought of myself as open to change, but maybe I’m not as open to change as I thought I was.

1 – ISO Is No Longer An Option

That’s right, you can’t just download an ISO and stick it on your network; you essentially have to create your own if you need to do an offline install.  As the title of this post says, this is an “Irritant”, not a deal breaker, but definitely an irritant.

2 – Start Page – It’s Baaack!

There is NO WAY to turn this annoying page off once you close a solution.  You can turn the Start Page off when you first open VS2017 (Tools | Options | Environment | Startup), but as soon as you close a solution, it shows up – ARGH!  See this thread; I’m not the only one annoyed by this behavior.  I’m a very visual person, and seeing a list of projects with really long paths doesn’t help me.  I need my Source Control Explorer to make sure I’m opening the right solution in the right branch.

VS2017-Tools-Startup

3 – Work Items Open in Browser By Default

One of the cool things about using VS as your IDE is that it has great integration with source control; especially if you use VSTS, it’s almost seamless (and everyone IS using some kind of source control, right?!).  You can open your list of User Stories based on iteration.  Once you have your list of User Stories, all you have to do is double-click the one you want and it opens right there in your VS environment.  Well, the default behavior for where your work items open has changed.  Your work items no longer open in VS; they open in a web browser.  You can change this default behavior back to opening in VS (Tools | Options | Work Items | General), but this behavior will be deprecated in the next major version of the product.  There is nothing more irritating than having to switch back and forth between applications when doing development work.  Pro Tip:  If you want folks to use Source Control, make it easy and all in ONE location!!!

VS2017-Tools-Options-WorkItems

VS2017-WorkItems-Deprecated

4 – With Each New Update, Something Breaks

Okay, this is not new behavior in VS2017, unfortunately, it’s continuing behavior from previous versions of VS. <sigh>  I was really hoping that this behavior would stop, but sadly it still persists.  For example, since SSMS is now based on the VS Shell, if I update VS, it usually horks up (yes, that is a very technical term) the settings in my SSMS.

5 – Closing Solutions Takes FOREVER!

Holy cow, in previous versions of VS, all I had to do was click File | Close Solution and in a split second my solution would close.  Now, it takes forever.  I have a solution with two projects in it and when I click File | Close Solution it takes anywhere from 15-30 seconds to close (I feel really bad for those that have more than 2 projects in their solution).  Not only that, it says it’s unloading 3 projects, not the 2 that I actually have!  WTH?  According to this thread, the problem has been fixed, but I can confirm that it most definitely has NOT been fixed as of version 15.3.5.  Maybe they only fixed it for Non-SSDT project types?

VS2017-UnloadSolution

Okay, my rant is over, nothing to see here, move along.

Scripts – A Living History

As a DBA, I have a collection of scripts that I use for anything from auto-fixing logins to seeing who has the DAC.  Since I’ve been a DBA for a while (yeah, a while, we’ll go with that) I have quite the collection of scripts and I am constantly adding to it. 

In the Prehistoric days of dinosaurs and floppy disks, I used to keep a backup copy of them on a 3 1/2" floppy.  This was convenient and portable, so if I ever changed jobs, I could take my scripts with me. 

Then we entered the Golden Age of writable CDs and I could burn them to a data CD.  Still portable but a little more durable than a 3 1/2" floppy, I didn’t have to worry about keeping my CD away from magnets.

Carrying a CD around may have been more durable, but it certainly wasn’t more convenient.  Enter the Renaissance Age of USB/Thumb drives.  Holy Cow, I could copy all my scripts to a USB drive and fit it in my pocket, I could take it with me everywhere I went, now that’s convenient!

Enter the Industrial Age and we got smarter about how we did things.  Hello Google Drive.  No more having to carry around anything but lint in my pockets.  As long as I had access to the internet, I had access to my scripts.  Even if the internet were temporarily unavailable, I could still access the scripts on my local hard drive. 

But then a funny thing happened: I modified one of my scripts to accommodate a specific version of SQL Server and accidentally overwrote the original file.  We’ve all been there, that moment when you click the Save button instead of Save As.  All the expletives rumbling around in your head because now you have to remember what it was before you overwrote it.  Enter the Space Age, the days of redundancy checks and fail-safes.  We in the development community call it source control.  When Microsoft announced its TFS Online offering three years ago, I couldn’t put my scripts in the cloud fast enough.  Of course the name has changed, but the idea remains the same: source control in the cloud.  The great thing is that you can actually do it for free (for up to five people).

Will you learn from history and protect your scripts or are you doomed to repeat it? 

One Tool to Rule Them All – Almost

There were so many cool announcements at the PASS Summit this year, but one of my favorites was the “One Tool to Rule Them All”. The SQL Server Data Tools (SSDT) team and the Visual Studio (VS) team have finally teamed up to give us one tool to do all our development work for Databases, SSIS, SSAS & SSRS. No more will we have to install Visual Studio Shell, SSDT, SSDT-BI and, for those source-control-minded folks (which should be everyone!) who use Team Foundation Server (TFS), Team Explorer. For SQL Server 2016 development we can do one install of Visual Studio 2015 and call it a day – well, almost.

SSDT Install

I was so excited when I got back from Summit, I downloaded SSDT (CTP3) from here. I was so happy to see the install screen.

SSDT Install Screen

There they were, in all their glory, all the SQL Server project types that I needed. No more having to download multiple install files. Oh happy day!

After the install completed, I was a bit dismayed to discover that it took 3GB of disk space to do this install, but I guess that’s par for the course anymore.

Visual Studio Install

Next I wanted to see if you got all these same project types with an install of Visual Studio. They announced at Summit that “SSDT” would now be “included” with Visual Studio. So I went out and downloaded Visual Studio (CTP3, Community Edition, i.e., free) from here. And look what shows up on the install features list, there it is in black and white, Microsoft SQL Server Data Tools, almost too good to be true.

Visual Studio Features

Well, we all know that if something seems too good to be true, then it usually is. This is no exception.  Let’s see if you can pick out the reason for my disappointment in the picture below.

Visual Studio Project Types

That’s right, the only SQL Server project types that are installed with Visual Studio are database projects. No SSIS, no SSAS & no SSRS. That was very disappointing. Also note that it installed the templates for Visual C#, Visual Basic, etc., when the only feature that I requested to be installed was SQL Server Data Tools. I guess that’s why this install took 5GB of disk space as opposed to the 3GB that SSDT required.

The good thing about the new Visual Studio is that if you use TFS as your source control, you no longer have to download the separate TFS Team Explorer; it is now built into Visual Studio. No additional installs are required.

Visual Studio Team Menu

Right “out of the box”, you get the Team menu item. However, this is NOT included in the SSDT install. I guess someone thinks we don’t really need to source control our SQL Server projects <sigh>.

Almost One Tool

Because I use TFS as my source control, I still have to do two installs: SSDT to get ALL the SQL Server project types AND Visual Studio so I can add all my SQL Server project types to source control.

This is definitely better than what we have to do now if we are doing development work prior to SQL Server 2016, but it’s not “One Tool to Rule Them All” yet. I’m hoping that since this is a CTP, the final products will contain “all the things”, but I certainly won’t hold my breath.

Now I’m off to test whether they’ve overcome the issue of database projects not playing nicely with SSAS projects. For those who use the multidimensional model with partitioning, you know exactly what I’m talking about. I’ll keep you posted with my results.

My Build and Deploy Process (as Requested by Bill Fellows)

Recently I attended RedGate’s SQL in the City event in Seattle, WA. I was in Seattle for the annual PASS Summit; you can read about my Summit adventures here. While at RedGate’s event, I attended a session that called on SQL Server Data Tools (SSDT) users. RedGate wanted to get a better handle on what pain points we had in SSDT with respect to source control. I use Team Foundation Server (TFS) as my source control product and it ties in very nicely with SSDT.

After this discussion, Bill Fellows (B | T) asked if I would be willing to blog or speak about my own build and deploy process for databases. Well, given that I am so NOT a speaker type, the seed for this blog post was planted.

I will not be diving into technical detail on how to do all these things (that would be a very large book), but rather giving an overview of the features of TFS, why I think they are important, and how I use them. Think of this as the 50,000-foot overview of my build and deploy process.

Integration with SSDT (Visual Studio)

Since I don’t want to use umpteen gazillion tools for development, I want my source control to integrate seamlessly with Visual Studio. TFS does this better than any other product I’ve used, probably because they are both Microsoft products. This way I don’t have to use multiple IDEs to work on SSIS, SSRS, SSAS and database development projects. I have one tool with the same source control experience for all.

Design your Branches Accordingly

TFS uses the Branch paradigm to split code out for different development efforts. I like this paradigm, it’s easy to visualize and makes sense to me. Designing your branches is probably the most important part of the source control process. Think of this as your data model, if you get this wrong, you will pay dearly for it in the end. Think about how your organization is structured and how your code moves through the development process. What environments do you have: Development, QA, Staging, Hotfix, etc.? How does your code move through those environments? Is it strictly one-way or can your code move in more than one direction?

Gated Check-ins

Because no matter how many times you tell your developers to do a build locally before checking in their changes, someone will inevitably forget. The last thing you want is bad code getting into your code base. Then you’re left with all your developers sitting around while changes are backed out/corrected, we all know what happens when developers sit around with idle hands. Gives me nightmares just thinking about it.

Automated Builds

This is so important. You most likely have more than one developer working on code. You want to make sure that all those changes they are making are not stomping all over each other and breaking things. Just because developers can get their code past the gated check-in doesn’t mean it won’t break something else. You should actually be doing this for all your environments, not just development. In a large shop I recently worked in, we scheduled our automated builds twice per day. The first one was at 3 a.m., which allowed enough time for correction before staff came in if a build failed. The second one was at lunch time. This one allowed us a “sneak peek” at the big picture before the nightly processes kicked off. While TFS does provide some default build templates, so many of us have such custom applications and database projects that you may have to learn how to write XAML – I did.

Build Notifications

This is one of my favorite “tattle tale” features of TFS. You can set up notifications to find out when things are checked in successfully, when check-ins fail, when builds fail, all kinds of things. Use this feature. I can’t stress this enough, USE THIS FEATURE!

Power Tools

While TFS has some great features, some of them are a bit hard to navigate/use. This is where Power Tools comes in. It’s freely available for download from MSDN. It puts some great features just a click away, instead of you having to write some obtrusive custom code to get what you want – like who has what checked out in a branch, wildcard searching, copying a query, cloning builds, etc.  The list is quite extensive.

Default Settings

All of these things don’t really do a lot of good unless you change the default settings for source control in SSDT. One of the biggest bang-for-your-buck settings is to automatically get the latest code when you open a solution. By default this is not enabled – silly, I know, but it’s not. The other setting is to check out objects automatically when they are edited. These two settings will make your source control life much easier.

Wrapping it up

I’m not going to lie, getting this all set up in TFS is no small effort. This is your livelihood, treat it as such. Do your research into how your company’s processes currently work and then compare them to how you want them to work. Once you have all that, you can come up with a build and deploy process that works for you.

Good luck!