Don’t Forget To Take a Break

So many times we get sucked down a rabbit hole when fixing or troubleshooting things that we forget to take a break.  More often than not this leads to the “not seeing the trees through the forest” syndrome, as my grandfather used to call it.

Sadly, this happens to me more often than I like to admit.  I am tenacious when it comes to problem solving and I don’t like to “give up”.  However, the moment I hit that mentality of “I’m not going to let this problem beat me” is exactly when I most need to take a break.

I have found that if I quit focusing on the problem at hand and do something else, often not work related, for a few hours, then when I come back to the issue I usually solve it within 10-15 minutes.

Case in point, just yesterday I was troubleshooting a Power BI Data Gateway issue.  We had the Gateway installed and configured.  The Power BI Service could see and communicate with it so we were all set to create our first data source.

We entered all our information about the data source, and when we clicked the Add button, it came back with a failure.  It was a very generic error that didn’t really tell us anything.  I searched the Power BI Community forums only to find that we had done everything correctly.  I was stumped, but I started down the path of “I’m not going to let this problem beat me!”  Right then is when I should have put it down and walked away.  But no, I kept doing the same things over and over again expecting a different result (and we all know what that leads to – CRAZY town).

I was forced to take a break and go coach my swim team of 6- to 12-year-old kids.  I completely forgot about the problem for two hours while I coached and talked with my kids.  When I got home and sat down at my desk, I had an epiphany: check the gateway logs!  Wow, why didn’t I think of that before?!  I discovered that the gateway had gotten itself into a bad state, so I restarted the gateway service and was able to successfully create the data source using the data gateway.  That took less than ten minutes, sheesh.  If only I’d been forced to coach swim team earlier.

So the next time you encounter a problem and just can’t seem to figure it out, take a break.  Give your brain something else to focus on and you just may save yourself from a trip to CRAZY town.

Speaking at DataGrillen

After speaking at SQL Saturday Iceland last year, which is a smaller event that I absolutely loved (you can read about that adventure here), I decided I wanted to do another smaller event.  They are much more intimate and offer a better chance of engaging with attendees, sponsors, organizers and other speakers.

I heard about a smaller event called SQLGrillen that took place in Germany.  Their motto is “Databases, Bratwurst and Bier”.  How cool is that?!  But alas, I missed the call for speakers deadline, so I kept my fingers crossed that the event organizers would put it on again this year and, lo and behold, they did.  They’ve changed the name to DataGrillen, but it’s still the same cool event.

I submitted a session and to my surprise and amazement, I was selected to speak.  I will be giving my Profiling Your Data session on Friday, June 21, 2019.  Now my German is very limited (Nein, Ja, Bitte, Bier & Bahnhof) but I’m still excited and I hope to see you there.

The event is already sold out, but they do have a wait list.  If you want to go, get your name on the list as soon as possible because it’s FIFO.

Auf Wiedersehen for now.

Speaking at SQLBits

I got an early Christmas present this year: I found out I had been selected to speak at SQLBits!  That’s what I call a gift that keeps on giving.

I have always wanted to attend SQLBits, so I decided that 2019 would be the year I would finally attend.  Since I had decided to attend, I thought, “What the heck, why not submit a session?  I’m going to be there anyway.”  But never in my wildest dreams did I expect to be selected.  I will be presenting my Profiling Your Data session.

It’s been twenty years since I was in England and I am super excited to be going back.  I have family ties to England, so I added a few extra days for sightseeing.  Last time I was there I visited Malvern Link, home of the Morgan Motor Company, my dad’s favorite auto manufacturer.  This time I am planning a quick trip over to Liverpool so I can see where my dad’s favorite band (and one of mine) got their start.  You may have heard of them; they’re called The Beatles.  ;-)

Speaking at SQL Saturday Nashville

I am excited and honored to announce that I have been selected to speak at SQL Saturday Nashville on January 12, 2019.

I’ve been to Nashville before, in fact I was just there last June for Music City Tech, and am super excited to be going back.

I will be presenting my Profiling Your Data session.  If you’re in the area and haven’t registered yet, there are still seats available; you can register here.

Feel free to stop by and say, “Hi”, I’d love to see you.

Where to Store Index DDL

Recently I was asked my opinion, via Twitter, on where to store the index DDL for a single database that had multiple clients with varied usage patterns.  You can see the whole thread here.

It’s a great question and there were some interesting suggestions made.  My approach to this scenario is kind of a hybrid of all the suggestions and comments.

I’ve had to deal with this kind of thing in the past, and what I found worked best is to create a single file for each client that contains the client-specific DDL.  I wish I could take credit for this idea, but it wasn’t mine; it belonged to a co-worker.  At first I resisted and thought it was a bad idea.  I mean really, mixing DDL for more than one object in a single script just seemed wrong and goes against every fiber of my OCD organizational self.  But in the end, this is what worked best in our environment.

Our initial thought was to include our index DDL with the table, but use variables to name the index objects so the names were specific to the client.  This way the index names would never collide, but that kind of defeated the whole purpose of different indexing strategies for different clients.  Thankfully we scrapped that idea before we implemented it.
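To make that scrapped idea concrete, here is a minimal sketch of what it might have looked like.  The table, column, and variable names are hypothetical; $(ClientName) stands in for a SQLCMD variable supplied at deploy time:

```sql
-- Hypothetical sketch of the scrapped approach: the same index definition
-- for every client, with only the index NAME varied per client.
-- $(ClientName) is a SQLCMD variable substituted at deploy time.
DECLARE @IndexName sysname = N'IX_Orders_OrderDate_$(ClientName)';
DECLARE @sql nvarchar(max) =
    N'CREATE NONCLUSTERED INDEX ' + QUOTENAME(@IndexName) +
    N' ON dbo.Orders (OrderDate);';
EXEC sys.sp_executesql @sql;
```

You can see the problem: the names never collide, but every client ends up with the exact same index definition, which is precisely what we were trying to avoid.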

We tried creating separate files for each table, with the client-specific DDL in each file.  That became a nightmare to deploy and maintain; we had to build logic into our pre- and post-deployment scripts to handle it.

Then we tried separating the index DDL files out by client, so we ended up with a bazillion index DDL files for each table.  Okay, maybe not a bazillion, but it was a lot, and it was even more of a nightmare to maintain.

We settled on the approach I mentioned earlier: one DDL file per client that held all the DDL that was specific to that client, regardless of object.  We found it was much easier to maintain and deploy.  We defaulted each of our client-specific DDL files to be NOT included in the build.  When it came time to do a build/deploy for a specific client, we would set the option to include the client-specific file in the build.  We were not using continuous integration, so this may not work if that is what your shop is doing.  Or it may work with just a few tweaks to your process.  It did work for our situation and it worked well.
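For illustration, a client-specific file might look something like this (the file, table, and index names here are made up, not from our actual project).  Guarding each statement with an existence check makes the script safe to re-run on every deploy:

```sql
-- ClientA.sql: ALL DDL specific to ClientA, regardless of object.
-- Excluded from the build by default; included only for ClientA deploys.

IF NOT EXISTS (SELECT 1 FROM sys.indexes
               WHERE name = N'IX_Orders_ShipDate'
                 AND object_id = OBJECT_ID(N'dbo.Orders'))
    CREATE NONCLUSTERED INDEX IX_Orders_ShipDate
        ON dbo.Orders (ShipDate) INCLUDE (CustomerID);

IF NOT EXISTS (SELECT 1 FROM sys.indexes
               WHERE name = N'IX_Customers_Region'
                 AND object_id = OBJECT_ID(N'dbo.Customers'))
    CREATE NONCLUSTERED INDEX IX_Customers_Region
        ON dbo.Customers (Region);
```

One file per client keeps everything that makes that client different in one place, which is what made deployment and maintenance manageable for us.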

I don’t think there is a single correct answer to this question.  Like almost everything else in our tech world, the answer is going to be, “it depends”.  Figure out what works in your environment and then go with it.  It might take a few trial and error runs to get it right, but you’ll figure out what works best over time with a little persistence.

I’d love to hear your thoughts on this.

Speaking At SQL Saturday DC

I am so excited to announce that I was selected to speak at SQL Saturday DC on December 8, 2018.

I will be presenting two sessions, What is Power BI? and Data Types Do Matter.  My Data Types Do Matter session is the same session that I presented at PASS Summit 2018, so if you couldn’t make it to PASS Summit this year, now’s your chance to see it.

If you’re in the Washington DC area on December 8, 2018, register for SQL Saturday DC and stop by and say, “Hello”.  I’d love to see you.

Speaking at SQL Saturday Oregon

I am so excited and honored that I have been selected to speak at SQL Saturday Oregon on November 3, 2018.

I will be presenting my Data Types Do Matter session at 10:15 am.  I am so excited to be presenting to a kind of “home town” crowd.  I lived in the Willamette Valley for a while when I was a kid and even graduated from high school out there.

If you’re in the Portland area on November 3, 2018, stop by and say hello, I’d love to see you!