Category Archives: Standards

Where to Store Index DDL

Recently I was asked, via Twitter, for my opinion on where to store the index DDL for a single database that has multiple clients with varied usage patterns.  You can see the whole thread here.

It’s a great question and there were some interesting suggestions made.  My approach to this scenario is kind of a hybrid of all the suggestions and comments.

I’ve had to deal with this kind of thing in the past, and what I found worked best was to create a single file for each client that contains the client-specific DDL.  I wish I could take credit for this idea, but it wasn’t mine; it belonged to a co-worker.  At first I resisted and thought it was a bad idea.  I mean really, mixing DDL for more than one object in a single script just seemed wrong and went against every fiber of my OCD organizational self.  But in the end, this is what worked best in our environment.

Our initial thought was to keep the index DDL with the table definition but use variables to name the index objects, so each client got its own index names.  That way the index names would never collide, but it also defeated the whole purpose of having different indexing strategies for different clients.  Thankfully we scrapped that idea before we implemented it.
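For illustration, here is a minimal sketch of what that scrapped idea would have looked like in an SSDT-style project; the table, columns, and the $(Client) SQLCMD variable are all hypothetical:

    -- Hypothetical sketch of the scrapped idea: the index name is
    -- parameterized with a SQLCMD variable so it never collides across
    -- clients, but every client still gets the same index definition.
    CREATE NONCLUSTERED INDEX [IX_Orders_OrderDate_$(Client)]
        ON dbo.Orders (OrderDate)
        INCLUDE (CustomerID);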

We tried creating a separate file for each table, with that table’s client-specific DDL inside it.  That became a nightmare to deploy and maintain; we had to build logic into our pre- and post-deployment scripts to handle it.

Then we tried separating the index DDL files out by client, so we ended up with a bazillion index DDL files for each table.  Okay, maybe not a bazillion, but it was a lot, and it was even more of a nightmare to maintain.

We settled on the approach I mentioned earlier: one DDL file per client that held all the DDL specific to that client, rather than to any one object.  We found it was much easier to maintain and deploy.  We defaulted each of the client-specific DDL files to be NOT included in the build.  When it came time to do a build/deploy for a specific client, we would set the option to include that client’s file in the build.  We were not using continuous integration, so this may not work if that is what your shop is doing, or it may work with just a few tweaks to your process.  It did work for our situation, and it worked well.
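To make that concrete, here is a hypothetical ClientA.Indexes.sql; the object names are invented, and the include/exclude toggle is whatever your project type offers (in an SSDT project, the file’s Build Action):

    -- Hypothetical ClientA.Indexes.sql: every index (and any other
    -- client-specific DDL) for ClientA lives in this one file.  The file
    -- defaults to excluded from the build and is switched on only when
    -- building/deploying for ClientA.
    CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
        ON dbo.Orders (OrderDate);

    CREATE NONCLUSTERED INDEX IX_Customers_Region
        ON dbo.Customers (Region)
        INCLUDE (CustomerName);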

I don’t think there is a single correct answer to this question.  Like almost everything else in our tech world, the answer is going to be “it depends”.  Figure out what works in your environment and then go with it.  It might take a few trial-and-error runs to get it right, but with a little persistence you’ll figure out what works best over time.

I’d love to hear your thoughts on this.

Policy-Based Management, How Sweet It Is

I have just recently changed employers, and in doing so I’ve changed my role from Data Architect to DBA/BI Developer.  I had been doing heads-down development work for the last four-plus years and not a lot of admin work.  Given that, I missed out on something that I think is just fantastic: Policy-Based Management (PBM), a new feature in SQL Server 2008.  I spent a lot of time when I was a DBA figuring out how to monitor my servers.  I used to use a combination of home-grown scripts and the SQLH2 utility.  But PBM makes a DBA’s life so much easier, especially when your shop can’t afford all the cool monitoring tools that are available today.

Basically, PBM allows a DBA to manage multiple instances of SQL Server in a consistent way.  You can create all your policies on one server and then export/import them onto your other servers.  While this is still better than the hodgepodge of scripts you had before, it’s a bit cumbersome.  Enter the Enterprise Policy Management Framework (EPM) from Codeplex.  It extends PBM via your Central Management Server and provides some very cool SSRS reports.  Because EPM is driven by PowerShell, it can be used on SQL Server versions from 2000 to 2012.

I am so excited to be going back to a dual role of DBA/developer, and Policy-Based Management and Enterprise Policy Management Framework make it that much sweeter.

Paying it Forward

My company has recently put forth an effort to hire fresh college graduates and interns.  I think this is a great thing.  A long time ago, in a galaxy far, far away, I was a fresh graduate with no real-world experience, and my first employer took a big chance in hiring me.  I was really lucky with my first employer: I had a couple of fabulous mentors who taught me not just technical skills, but life skills that I still use today.  Employers who do this are getting in on the ground floor, and it’s in their best interest to invest resources in what will hopefully be a long, successful, and rewarding career.  “Paying it forward”, as it were.

As a data professional, there are some essentials that will make your life so much easier down the road.  The first one is Standards and Best Practices.  Establish these early and they will become habit.  Now, I could drone on forever about particular Standards and Best Practices, but that’s not my point.  My point is that if you start off your career by adhering to standards and best practices, it won’t come as a shock later when they are forced upon you.  You may not like the particulars, but it won’t feel so icky the first time you have to use them if you are already used to following something.  In working with our new employees, I always take the time to point them to our standards and explain why we have them.  They are always appreciative of this, especially the explanation.

Another essential is code review.  This is a big bang for your buck; it’s a learning experience for everyone involved.  We use an Agile Methodology in our development life cycle, so our teams are fairly small: they usually consist of two to three general developers, a technical architect, and a data architect or DBA-type person.  Following the Agile Methodology, anybody should be able to work on anything in the product/iteration backlog.  General developers can usually write T-SQL, but most times it won’t be nearly as efficient (or, dare I say, pretty) as if the data architect/DBA-type person wrote it.  Code review allows the team to become familiar with a task they are not directly involved with and provides valuable feedback to the person who wrote it.  They may not be as familiar with the subtleties of the business rules, or they may have no idea that when you use three nested select statements in the predicate, you will most likely end up doing a table scan on that table with 30 million records, three times!  All they know is that their query returns, and it returns the correct answer.
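To sketch what that looks like, here is a hedged example; the tables and columns are invented, and the rewrite is just one way to avoid hitting the big table three times:

    -- Hypothetical anti-pattern: three nested selects in the predicate,
    -- each of which can trigger its own scan of the 30-million-row table.
    SELECT o.OrderID, o.OrderDate
    FROM dbo.Orders AS o
    WHERE o.CustomerID IN (SELECT b.CustomerID FROM dbo.BigTable AS b WHERE b.Flag1 = 1)
      AND o.CustomerID IN (SELECT b.CustomerID FROM dbo.BigTable AS b WHERE b.Flag2 = 1)
      AND o.CustomerID IN (SELECT b.CustomerID FROM dbo.BigTable AS b WHERE b.Flag3 = 1);

    -- One possible rewrite: make a single pass over the big table and
    -- evaluate all three conditions during that one pass.
    SELECT o.OrderID, o.OrderDate
    FROM dbo.Orders AS o
    JOIN (
        SELECT b.CustomerID
        FROM dbo.BigTable AS b
        GROUP BY b.CustomerID
        HAVING MAX(CASE WHEN b.Flag1 = 1 THEN 1 ELSE 0 END) = 1
           AND MAX(CASE WHEN b.Flag2 = 1 THEN 1 ELSE 0 END) = 1
           AND MAX(CASE WHEN b.Flag3 = 1 THEN 1 ELSE 0 END) = 1
    ) AS f
        ON f.CustomerID = o.CustomerID;

The point for the reviewer isn’t this specific rewrite; it’s noticing that the original query touches the big table three times when once will do.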

Take the time to “pay it forward”; it can only benefit you in the long run, and the person you “paid” will be that much better for it.