Tag Archives: SQL Server

Where to Store Index DDL

Recently I was asked my opinion, via Twitter, on where to store the index DDL for a single database that had multiple clients with varied usage patterns.  You can see the whole thread here.

It’s a great question and there were some interesting suggestions made.  My approach to this scenario is kind of a hybrid of all the suggestions and comments.

I've had to deal with this kind of thing in the past and what I found worked best was to create a single file for each client that contains the client-specific DDL.  I wish I could take credit for this idea, but it wasn't mine, it belonged to a co-worker.  At first I resisted and thought it was a bad idea.  I mean really, mixing DDL for more than one object in a single script just seemed wrong and went against every fiber of my OCD organizational self.  But in the end, this is what worked best in our environment.

Our initial thought was to include our index DDL with the table, but use variables to name the index objects that were specific to the client.  This way the index names would never collide, but then that kind of defeated the whole purpose of different indexing strategies for different clients.  Thankfully we scrapped that idea before we implemented it.

We tried creating a separate file for each table, with the client-specific DDL for that table in each file.  That became a nightmare when it came time to deploy and maintain.  We had to build logic into our pre- and post-deployment scripts to handle it.

Then we tried separating the index DDL files out by client, so we ended up with a bazillion index DDL files for each table.  Okay, maybe not a bazillion, but it was a lot and it was even more of a nightmare to maintain.

We settled on the approach I mentioned earlier: one DDL file per client that held all the DDL that was specific to the client, not specific to any object.  We found it was much easier to maintain and deploy.  We defaulted each of our client-specific DDL files to be NOT included in the build.  When it came time to do a build/deploy for a specific client, we would set the option to include the client-specific file in the build.  We were not using continuous integration, so this may not work if that is what your shop is doing.  Or it may work with just a few tweaks to your process.  It did work for our situation and it worked well.
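
To make this concrete, here is a minimal sketch of what one of those per-client files might look like.  The file name, tables, columns and index names are all made up for illustration; the IF NOT EXISTS guards just make the script safe to re-run against a client's database.

-- ClientA_Indexes.sql (hypothetical name): all index DDL specific to ClientA,
-- kept in one script and only included in the build when deploying for ClientA.
IF NOT EXISTS (SELECT 1 FROM sys.indexes
               WHERE name = N'IX_Orders_OrderDate'
                 AND object_id = OBJECT_ID(N'dbo.Orders'))
    CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
        ON dbo.Orders (OrderDate)
        INCLUDE (CustomerID, TotalDue);

IF NOT EXISTS (SELECT 1 FROM sys.indexes
               WHERE name = N'IX_Customers_Region'
                 AND object_id = OBJECT_ID(N'dbo.Customers'))
    CREATE NONCLUSTERED INDEX IX_Customers_Region
        ON dbo.Customers (Region, LastOrderDate);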

I don’t think there is a single correct answer to this question.  Like almost everything else in our tech world, the answer is going to be, “it depends”.  Figure out what works in your environment and then go with it.  It might take a few trial and error runs to get it right, but you’ll figure out what works best over time with a little persistence.

I’d love to hear your thoughts on this.

New Session: Data Types Do Matter

I started speaking on a semi-regular basis this year after many years of trepidation.  You can read about my first speaking experience here.  It's been three years since then and I have discovered that I love speaking.  I always wanted to be a teacher when I was younger, but it didn't work out that way.  I have taught swim lessons off and on for over 20 years and coached a swim team, things I am very passionate about, so speaking about SQL Server seemed like the next step.

I am mainly a BI developer now, using the Microsoft stack, but my roots are in DBA land.  There is one thing that I have run across time and again doing my BI development work and it's something that surprises me every time: people constantly using the wrong data type for their data.  So I decided people are either lazy or they aren't educated on the pitfalls of using the incorrect data type.  Hence my new session was born, Data Types Do Matter.
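
To give just one quick example of the kind of pitfall the session covers, here is a sketch of dates stored as character data; the table names and values are made up for illustration.

-- Dates stored as varchar sort alphabetically, not chronologically.
CREATE TABLE dbo.OrdersVarchar (OrderID int, OrderDate varchar(10));
INSERT INTO dbo.OrdersVarchar VALUES (1, '2/1/2017'), (2, '10/15/2017');

SELECT OrderID, OrderDate
FROM dbo.OrdersVarchar
ORDER BY OrderDate;   -- '10/15/2017' sorts before '2/1/2017', even though October comes after February

-- The correct data type is smaller, rejects bad values and sorts correctly.
CREATE TABLE dbo.OrdersTyped (OrderID int, OrderDate date);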

I’m test driving it at the January 30, 2018, Triad SQL User Group meeting.  Come on out and say, “Hi”, if you’re free.  I’d love to see you and get your feedback on this brand new session.

Import Export Wizard Mapping Files

Recently I had to copy data from our AS400 to a SQL Server database.  Normally I would whip up my fancy SQL Server Integration Services (SSIS) project and get it hammered out pretty quickly.  Unfortunately there were over 4,000 tables that I needed to pull data from, and there was no way in HELL I was going to manually create 4,000+ packages.  Now most of my BIML friends would say, "I could BIML that for you in 2 hours", and if my BIML weren't so rusty, I probably could have too.  I didn't have to do any fancy transformations on the data, I just had to copy it.  So I decided to take the "easy way out" and use the Import Export Wizard in SQL Server Management Studio.  Shoot, all I would have to do is a few clicks and be done with it, right?  Boy was I wrong.

This post talks about the issue I ran into with SSIS Mapping Files.

We currently run DB2 on an IBM iSeries AS400 for our ERP system.  I was tasked with copying data from the AS400 to a SQL Server database for some consultants to use.  The C-Suite didn't want to give the consultants access to our AS400, so this was the workaround that was put forth and accepted (and no, no one asked me before I was "voluntold" for the task).  Since this would essentially be a "one-time" thing, I chose to use the Import Export Wizard, but I would save the package just in case they wanted this process repeated.

I fired up the Import Export Wizard and selected my source, IBM DB2 for i IBMDA400 OLE DB Provider.  Now before you can select this data source you must install the IBM DB2 drivers.  You can find out more about them here; unfortunately you have to have a maintenance contract and an account with IBM before you can download them <sigh>.  It's a straightforward install once you have the installation package.

EZ peazy, lemon squeezy.

I selected my destination, SQL Server Native Client 11.0, of course!


On a roll now, should only be another 30 seconds and I can put this project to bed.  Well, we all know that’s not what happened, otherwise you wouldn’t be reading this.

When I clicked on the Edit mappings button in the wizard to make sure all the data types had been mapped successfully, I got "<unknown type>" for every single column.  WTH?!  This doesn't happen when I'm working in Visual Studio with my SSIS projects.  After some frantic googling, I found a couple of very good articles on the Mapping Files for the Import Export Wizard.

Data Type Mapping

Import Export Wizard can show numbers..

I took the advice of the articles and made copies of my Mapping Files before I modified them.  I made my modifications to include the column types and their respective mappings courtesy of the Data Type Mapping article and saved my changes.  I made sure the Import Export Wizard was closed, then started it again.  This isn't so hard, no big deal, they'll all populate correctly now…WHAT?!  Still <unknown type> for all columns!  Now it had become a matter of principle, I was going to solve it this way; I will NOT resort to brushing up on my BIML.

After many attempts I finally figured out what the issue was.  Actually, there were two.  First, the order in which the Import Export Wizard searches through the Mapping Files.  Second, the SourceType within the Mapping File.

According to the Import Export Wizard, my source provider is IBMDA400 and it can’t find any mapping file.  But if you notice the Mapping file for my destination…

For the first issue, a little background on how the Import Export Wizard works.  When you select a source and destination the wizard has to know how to map the data types from source to destination so you don’t end up with gobbledygook in your destination.  So it searches through all the files in the following directories, depending on your architecture (I’m using SQL Server 2016 hence the 130 in the path):

C:\Program Files\Microsoft SQL Server\130\DTS\MappingFiles

C:\Program Files (x86)\Microsoft SQL Server\130\DTS\MappingFiles

The key phrase there is ALL the files in the directory, so if you just copy your original files to the same folder (with the famous " – copy" so courteously appended by Windows Explorer), it will search through both your original AND the modified copy of the respective Mapping File.  In my case it was finding the source/destination match in the original Mapping File and completely ignoring my modified copy <sigh>.  Easy enough to fix, I moved my original "copies" to a completely different folder.

For the second issue, the SourceType within the Mapping File.  Now I will admit that I had been working on this for about 5 hours straight and had become so obsessed with making this work with the Import Export Wizard that I stopped paying attention to detail.  I want to see how long it takes you to find the issue.  This is the file that I thought should work.

[Screenshot: my Mapping File with SourceType="IBMDADB2*"]

This is the file that actually works:

[Screenshot: the working Mapping File with SourceType="IBMDA*"]

Did you find it?  How long did it take you?  It took me about an hour to figure it out.

In case you still haven't found the issue, the answer is:  the SourceType in the first file is IBMDADB2* and in the second file it is IBMDA*.  Since our source provider is IBMDA400 and the first file specifies IBMDADB2*, there will be no match on the source.  As soon as we change the SourceType to IBMDA* we get a match (the * is used as a wildcard) and it works.  Three little letters, that's all it took for me to waste half a day.

What I ended up doing, instead of modifying the original mapping file, was creating a copy of it, renaming it to something meaningful to me (but still following the naming convention of the Mapping Files), changing the SourceType value to IBMDA* and adding all the data types that were missing.  This way there is no conflict if I ever need to use the original mapping file with the IBMDADB2 SourceType.

I hope this helps someone else.  There are tons of posts out there about data mapping, but none of them tell you to pay special attention to the issues I had.  Granted, my issues were self-created, but they were issues nonetheless.

SQL Server 2016, Database Mail and .Net 3.5 Framework

There were so many cheers when Microsoft announced that the .Net 3.5 Framework was no longer a pre-requisite for installing SQL Server starting with SQL Server 2016.  Folks were so excited, until they started testing certain pieces of the product, specifically Database Mail.  That’s right, if you want to use Database Mail in SQL Server 2016 you have to install the .Net 3.5 Framework.

If you are installing SQL Server 2016 on Windows Server 2012 R2, this might be a problem for two reasons.  First, the .Net 3.5 Framework is no longer included by default when spinning up a new install.  Second, you will get no errors when testing Database Mail in SQL Server 2016.  Mail will be queued, but will not be sent and there will be no errors in the Database Mail log, the SQL Agent log or even the SQL Server error log.

So if you’ve gone through all the usual steps to troubleshoot Database Mail (or these steps) in SQL Server 2016 to no avail, be sure to double check that the .Net 3.5 Framework has been installed.  If not, you will need to install it, then apply ALL the patches for it.
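
For reference, these are the kinds of checks I mean.  The profile name and recipient below are placeholders, but the msdb objects are the standard Database Mail ones; the frustrating part is that on a box without the .Net 3.5 Framework they all come back looking perfectly healthy.

-- Is the mail queue started?
EXEC msdb.dbo.sysmail_help_status_sp;

-- Send a test message (profile name and address are made up).
EXEC msdb.dbo.sp_send_dbmail
     @profile_name = N'DBA Mail Profile',
     @recipients   = N'dba@example.com',
     @subject      = N'Database Mail test',
     @body         = N'Test message from SQL Server 2016.';

-- The message just sits in an unsent/queued state...
SELECT mailitem_id, sent_status, send_request_date, subject
FROM msdb.dbo.sysmail_allitems;

-- ...and nothing useful shows up in the Database Mail log either.
SELECT log_date, event_type, description
FROM msdb.dbo.sysmail_event_log;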

Scripts – A Living History

As a DBA, I have a collection of scripts that I use for anything from auto-fixing logins to seeing who has the DAC.  Since I’ve been a DBA for a while (yeah, a while, we’ll go with that) I have quite the collection of scripts and I am constantly adding to it. 

In the Prehistoric days of dinosaurs and floppy disks, I used to keep a backup copy of them on a 3 1/2" floppy.  This was convenient and portable, so if I ever changed jobs, I could take my scripts with me. 

Then we entered the Golden Age of writable CDs and I could burn them to a data CD.  A data CD was still portable but a little more durable than a 3 1/2" floppy; I didn't have to worry about keeping it away from magnets.

Carrying a CD around may have been more durable, but it certainly wasn’t more convenient.  Enter the Renaissance Age of USB/Thumb drives.  Holy Cow, I could copy all my scripts to a USB drive and fit it in my pocket, I could take it with me everywhere I went, now that’s convenient!

Enter the Industrial Age and we got smarter about how we did things.  Hello Google Drive.  No more having to carry around anything but lint in my pockets.  As long as I had access to the internet, I had access to my scripts.  Even if the internet were temporarily unavailable, I could still access the scripts on my local hard drive. 

But then a funny thing happened: I modified one of my scripts to accommodate a specific version of SQL Server and accidentally overwrote the original file.  We've all been there, that moment when you click the Save button instead of Save As.  All the expletives rumbling around in your head because now you have to remember what it was before you overwrote it.  Enter the Space Age, the days of redundancy checks and fail safes.  We in the development community call it source control.  When Microsoft announced its TFS Online offering three years ago, I couldn't put my scripts in the cloud fast enough.  Of course the name has changed, but the idea remains the same, source control in the cloud.  The great thing is that you can actually do it for free (for up to five people).

Will you learn from history and protect your scripts or are you doomed to repeat it? 

One Tool to Rule Them All – Almost

There were so many cool announcements at the PASS Summit this year, but one of my favorites was the "One Tool to Rule Them All". The SQL Server Data Tools (SSDT) team and the Visual Studio (VS) team have finally teamed up to give us one tool to do all our development work for Databases, SSIS, SSAS & SSRS. No more will we have to install Visual Studio Shell, SSDT, SSDT-BI and, for those source control minded folks (which should be everyone!) that use Team Foundation Server (TFS), Team Explorer. For SQL Server 2016 development we can do one install of Visual Studio 2015 and call it a day, well, almost.

SSDT Install

I was so excited when I got back from Summit that I downloaded SSDT (CTP3) from here. I was so happy to see the install screen.

SSDT Install Screen

There they were, in all their glory, all the SQL Server project types that I needed. No more having to download multiple install files. Oh happy day!

After the install completed, I was a bit dismayed to discover that it took 3GB of disk space to do this install, but I guess that's par for the course anymore.

Visual Studio Install

Next I wanted to see if you got all these same project types with an install of Visual Studio. They announced at Summit that “SSDT” would now be “included” with Visual Studio. So I went out and downloaded Visual Studio (CTP3, Community Edition, i.e., free) from here. And look what shows up on the install features list, there it is in black and white, Microsoft SQL Server Data Tools, almost too good to be true.

Visual Studio Features

Well, we all know that if something seems too good to be true, then it usually is. This is no exception.  Let’s see if you can pick out the reason for my disappointment in the picture below.

Visual Studio Project Types

That’s right, the only SQL Server project types that are installed with Visual Studio are database projects. No SSIS, no SSAS & no SSRS. That was very disappointing. Also note that it installed the templates for Visual C#, Visual Basic, etc., when the only feature that I requested to be installed was SQL Server Data Tools. I guess that’s why this install took 5GB of disk space as opposed to the 3GB that SSDT required.

The good thing about the new Visual Studio is that if you use TFS as your source control, you no longer have to download the separate TFS Team Explorer; it is now built into Visual Studio. No additional installs are required.

Visual Studio Team Menu

Right “out of the box”, you get the Team menu item. However, this is NOT included in the SSDT install. I guess someone thinks we don’t really need to source control our SQL Server projects <sigh>.

Almost One Tool

Because I use TFS as my source control, I still have to do two installs, SSDT to get ALL the SQL Server project types AND Visual Studio so I can add all my SQL Server project types to source control.

This is definitely better than what we have to do now if we are doing development work prior to SQL Server 2016, but it’s not “One Tool to Rule Them All” yet. I’m hoping that since this is a CTP, the final products will contain “all the things”, but I certainly won’t hold my breath.

Now I’m off to test if they’ve overcome the issue of database projects playing nicely with SSAS projects. For those that use the multidimensional model with partitioning, you know exactly what I’m talking about. I’ll keep you posted with my results.

TSQL2sday #68 – Defaults

A big thank you goes out to Andy Yun (b|t) for hosting this month’s TSQL2sday party. This month’s topic is Just Say No to Defaults.


TSQL2sday #68

I have a Bachelor’s degree in Mathematics with a minor in Computer Science. So when I say I’m a fairly smart person, it’s not bragging, it’s simply the truth. So when people say, “Any monkey can be a SQL Server DBA”, I find it offensive. While that statement may be close to true, it’s not the whole truth. Yes, Microsoft made SQL Server easy to use right out of the box, but if you want SQL Server to run well, you better have one of those really smart monkeys.

I can't tell you how many times I have been approached by my friends that are developers, DBAs for other RDBMSs or SysAdmins and asked to "fix" their SQL Server environment. They are really smart people, but they made the mistake of thinking that they could simply install SQL Server, accepting all the defaults, be off and running and never look back. The biggest complaint I hear from these same people is that "SQL Server is slow". My response is usually something like, "Hmm, I can't imagine why that would be", in my most snicker-y voice.

There are so many things that can be tweaked in SQL Server to improve performance, but there are two things you can change right off the bat that will make a world of difference. They are the defaults for database file location and max memory. In fact, if you don’t change the default location for your database files and the max memory setting, a lot of the other performance tweaks won’t make that big of a difference.

Database File Location

When the SQL Server install runs, it asks where you want to put your database files. The default location is on the same drive where SQL Server is installed, which is typically the same drive as the OS installation. Do NOT accept this default, just say NO! If you have a high volume transactional system, this will cause competition with the OS and guess who loses? You do. You should take this a step further and separate out your data files from your log files. And your tempdb should have a drive all to itself. (Note: When I say drive, I am referring to physically separate disks, not a single disk that has been partitioned into multiple drives. If you’re using a SAN, make sure you coordinate with your SAN administrator to get your drives created from the correct “LUN pools”.)
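
If it helps to picture that separation, here is a minimal sketch, assuming D:, L: and T: are physically separate disks for data, log and tempdb; the database name and paths are made up for illustration.

-- User database with data and log files on separate physical drives.
CREATE DATABASE SalesDB
ON PRIMARY (NAME = N'SalesDB_data', FILENAME = N'D:\SQLData\SalesDB.mdf')
LOG ON     (NAME = N'SalesDB_log',  FILENAME = N'L:\SQLLogs\SalesDB_log.ldf');

-- Move tempdb to its own drive; the new location takes effect after the instance restarts.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = N'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = N'T:\TempDB\templog.ldf');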

Max Memory

After SQL Server has been installed, the default max memory setting basically allows SQL Server to use as much memory as it wants. Sounds like a good idea on the surface, but just say NO! SQL Server is very greedy when it comes to memory; it will take every byte your server has to offer, leaving your OS starved. My general rule of thumb is to allocate ¾ of the total memory to SQL Server, never leaving less than 2GB, but not more than 4GB, for the OS.
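
As an example of that rule of thumb, on a server with 16GB of RAM I would hand roughly 12GB to SQL Server and leave 4GB for the OS; the number below is just for that scenario, so adjust it for your own hardware.

-- max server memory is set in MB; 12288 MB = 12GB on a 16GB server.
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max server memory (MB)', 12288;
RECONFIGURE;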

These are not the only defaults you should change, but these are two that will get you the most bang for your buck. They are easy to change and the implications/consequences of these changes are easy to understand. If you are a beginner with SQL Server, start with these two things, they will make you look like a very smart monkey.