TSQL2sday #66 – Monitoring

A big thank you to Cathrine Wilhelmsen (blog | twitter) for hosting this month’s TSQL2sday party. Monitoring is this month’s topic and it’s a very important one. It could mean the difference between having a job and looking for a job.

When I started working with SQL Server (a long time ago, in a galaxy far far away) there were no commercial monitoring tools available and quite often I would get called or paged (yes, it was a long time ago) in the middle of the night by a very angry boss because there was something “wrong” with the database. Or worse yet, I would get no call at all and show up at work the next morning with a line of angry people waiting for me when I got off the elevator. It only took a couple of these encounters for me to realize that I needed to be much more proactive or change my line of work (I had heard that underwater basket weaving was an easy gig).

I started looking at the reasons I was being called and discovered most of them were things that could easily have been avoided if I had known about them earlier. Things like database and transaction log files filling up, running out of disk space, and processes/queries that were taking longer and longer. Since there were no commercial monitoring tools out there, I decided I needed to essentially roll my own.
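
If you want a feel for what those home grown checks looked like, here is a minimal sketch of the simplest one: reporting current data and log file sizes against their configured limits. It only queries sys.master_files; the thresholds, alerting and scheduling are up to you.

SELECT  DB_NAME(mf.database_id)          AS DatabaseName,
        mf.name                          AS LogicalFileName,
        mf.type_desc                     AS FileType,
        mf.size / 128                    AS CurrentSizeMB,   -- size is stored in 8 KB pages
        CASE WHEN mf.max_size = -1 THEN NULL                 -- -1 means unlimited growth
             ELSE mf.max_size / 128 END  AS MaxSizeMB
FROM sys.master_files AS mf
ORDER BY DatabaseName, FileType;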

I had to start looking under the covers to find what I was looking for. This gave me even greater insight into how SQL Server worked. Did I mention that this was before Google? I couldn’t just search for easy answers, I had to really dig into the system databases to find what I wanted. This was, in fact, one of the best things that could have happened to me so early in my career as a DBA. I was forced to learn how SQL Server worked on my own.

To this day, I still “carry” around my home grown monitoring solution in my toolbox. I have updated and expanded it through the years to accommodate newer versions and functionality, and made it more efficient along the way. Not all shops have the budget for monitoring tools, and even if they do, a lot of the time they are only willing to spend that money on production servers, not development or test (don’t get me started, that’s an entirely different blog post).

My little monitoring solution has come in handy over the years because it has afforded me the opportunity to discover what’s under the covers of the newest versions/features of SQL Server and to provide a no-cost basic monitoring solution to my employers when the budget is tight or non-existent. If you don’t have your own monitoring solution, I would highly recommend you create one, if for nothing more than the reasons I stated above.

Don’t get me wrong, I love the commercial monitoring tools that I have access to now, but knowing the how and why of SQL Server will only make you a better DBA and could possibly mean the difference between having a job and looking for a job.

Managing Security – TSQL2sday # 63

A big thank you goes out to Kenneth Fisher ( b | t ) for hosting this month’s TSQL2sday party. Security is a big deal. How many times have you opened the paper (I’m dating myself, I know – no one reads an actual newspaper anymore, it’s all online now) in the last 6 months and seen a story about another security breach, more records compromised or flat out stolen? Too many. While securing your data is probably the key to keeping your current employment status, there’s also a piece of security that is quite often overlooked and could be the reason for a résumé-generating event: recovering from a failed server when you don’t use any of the HA features that are now available.

The scenario:
Your production server has failed and you don’t use any of those new fancy HA features like Always On Availability Groups, Log Shipping or even Database Mirroring. Your server hosts a standalone instance for the HR/Payroll department. Payroll must be processed in the next two hours or your company will be out of compliance with Federal Regulations and face heavy fines, not to mention all the really mad employees who won’t get their paychecks on time. I don’t know about you, but I do NOT want to be responsible for every employee not getting a paycheck, including myself.

You have a good backup plan in place: you take full, differential and log backups on a schedule that meets the minimum required data-loss SLA and send those backups to a remote SAN data store. Your Sysadmin stands up a new standalone server for you in 30 minutes. You install and configure SQL Server in about 60 minutes (those pesky service packs and cumulative updates can take quite a bit of time). Now you are left with 30 minutes to get your databases restored and functioning. No sweat! Easy as 1..2..3, right? Wrong!

You restore your database only to discover that all your logins no longer exist on your brand new server. No problem, just recreate the logins and give them brand new passwords (SQL Authentication). All will be right with the world. You give your HR/Payroll department the okay to proceed and you catch your breath with 20 minutes to spare. The phone rings 5 minutes later, it’s HR/Payroll and it’s not working. They are getting invalid login errors. You have that momentary flashback to when you helped with the application install 4 years ago – the vendor hard-coded the password into their application code, so you can’t just change it or give it a new password. That’s when you remember that you created a job to script the logins with their passwords on a weekly basis and saved the results off to a file on that same remote SAN data store as the backups. Hallelujah! You find your script on the remote SAN data store, clean up the logins you created, then execute the script with the logins and their passwords. HR/Payroll is back up and running with 4 minutes to spare.

Paychecks for everyone!

While some of this may seem far-fetched, it’s based on an actual incident very early in my career. I may have embellished a little, but you get the point. You need to make sure you can recreate any login on your server at any time due to disaster/failure. If you can’t, you may just be looking for a new job.

To this day I still script the logins on all my servers on a weekly basis. I store that file in a secure location on a remote server. I’ve never had to use one since this original incident, but it’s nice to know that I can recreate the logins if I ever need to. Can you?
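
If you want to roll your own version of that weekly script, here is a minimal sketch of the idea for SQL Authentication logins. It generates CREATE LOGIN statements that carry the original password hash and SID, so a hard-coded application password survives the rebuild; Windows logins, server roles and permissions would still need their own scripting, and Microsoft’s well-known sp_help_revlogin script is the fuller version of the same idea.

SELECT 'CREATE LOGIN ' + QUOTENAME(name)
     + ' WITH PASSWORD = '
     + CONVERT(varchar(512), CONVERT(varbinary(256), LOGINPROPERTY(name, 'PasswordHash')), 1)
     + ' HASHED'
     + ', SID = ' + CONVERT(varchar(200), sid, 1)
     + ', CHECK_POLICY = OFF;'
FROM sys.sql_logins
WHERE name NOT LIKE '##%';   -- skip the internal certificate-mapped logins

Run that on a schedule, save the output to a secure remote location, and the scenario above becomes a copy-and-paste exercise instead of a crisis.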

Transaction Isolation Level Blues

Have you ever had a mental block in one particular area when learning something? It might be the simplest thing, but for some reason your brain turns to Teflon when you try to store the information. For example, I have a degree in Math, so I am pretty good at arithmetic, but for the life of me I cannot remember what eight plus five is. I always have to break out my phalanges to get the answer.  Why the Hell can I remember what phalanges means and not a simple thing like eight plus five?!

I have this same problem when it comes to Transaction Isolation Levels in SQL Server. I can remember that there are five of them, Read Uncommitted, Read Committed, Repeatable Read, Snapshot & Serializable, but I cannot remember the little nuances that set them apart. It’s total Teflon. So I decided it was time to come up with a little song to help me remember. My older sister is a preschool teacher and she says that if you learn something as a song, it sticks with you for life. Here’s hoping that is true!

This is sung to the tune of George Thorogood’s Bad to the Bone.

At the time I am used
No Shared locks are issued
Not blocked by X locks
It is Loosey-Goosey
Just call me crazy
No restrictions abound
I could tell right away
It was Read Uncommitted

Bad to the bone
Bad to the bone
B-B-B-B-Bad
B-B-B-B-Bad
B-B-B-B-Bad
Bad to the bone

Not breakin’ any rules
Going by the book
Not readin’ any uncommitted
Transactions it’s true
I am the default baby
Transactions alone
I’m Read Committed
That’s what I do

Bad to the bone
B-B-B-Bad
B-B-B-Bad
B-B-B-Bad
Bad to the bone

No readin’ ’til committed
Can’t read dirty data either
I use shared locks baby
And hold ’til committed
I’m the repeatable read baby
Yours and yours alone
Data’s all yours honey
And I’m bad to the bone

B-B-B-B-Bad
B-B-B-B-Bad
B-B-B-B-Bad
Bad to the bone

When I query data
Kings and Queens step aside
Every bit I meet
It’s mine it’s all mine
Serializable baby
Range blocks on keys that’s me
HOLDLOCK does the same thing baby
Serializable oo-ee

Bad to the bone
B-B-B-B-Bad
B-B-B-Bad
B-B-B-Bad
Bad to the bone

(Extra verse)
There’s no write blockin’
While I’m readin’
No locks less I’m recoverin’
You can’t switch to me
But I can switch to you
I’m a snapshot baby
A photo just for you

Bad to the bone
B-B-B-B-Bad
B-B-B-Bad
B-B-B-Bad
Bad to the bone

If your brain is Teflon when it comes to Transaction Isolation Levels, then I hope this helps. If not, I hope you got a good laugh and please don’t tell George Thorogood what I did to one of his best songs (and one of my favorites).
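
(And if the song doesn’t stick, the plain syntax might. You set the level per session before your transaction; the table name below is hypothetical, it’s just there to show the shape of it.)

-- Pick one per session; SNAPSHOT also requires ALLOW_SNAPSHOT_ISOLATION ON at the database level.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
-- SET TRANSACTION ISOLATION LEVEL READ COMMITTED;   -- the default
-- SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
-- SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
-- SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
    SELECT COUNT(*) FROM dbo.SomeTable;   -- hypothetical table
COMMIT;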

By the way, eight plus five is … thirteen.

Summit 2014

It’s hard to believe it’s over.  It felt like a whirlwind while I was in Seattle for my 7th PASS Summit, but now that I’m back home it feels like it was ages ago.  I think time moves more quickly when you’re with friends and that’s where I was, with friends.

I got to reconnect with old friends and meet new ones.  I didn’t attend nearly as many sessions as I would have liked, because let’s face it, cloning technology isn’t quite where it needs to be as Michael Keaton found out in Multiplicity.  With my luck my “Number Four” would have attended one of Paul Randal’s sessions and I would have wound up doing God knows what to my servers when I got back.

I also got to meet people that I have “worked” with for quite a while virtually, but never met in person.  I must say it’s always refreshing when their “in person” exceeds your expectations.  There are so many genuinely nice people in our community, I am truly in awe.

In years past I have not been able to participate in most of the after-hours activities due to Summit happening right before a big annual swim meet, which meant I couldn’t take a break from training.  This year, my swim meet was the week before Summit so I didn’t need to get up at 4:30 a.m. every morning to make it to practice before breakfast.  I got to see how the “other half” lived at Summit this year.  I must say it was eye opening and entertaining.  They don’t have next year’s swim meet on the calendar yet, but I have the Summit dates, so next year’s meet just may have to go on without me.

If you’ve ever attended a PASS Summit, you know what I’m talking about when I say I’ve already started the countdown until next year’s Summit.  If you’ve never attended a Summit, what are you waiting for?

Data Driven Subscription On A Budget, Part 2

Yes, this is Part 2, you can find Part 1 here to get background information.

This post will cover point 3 – distributing a report to a fluctuating list of subscribers.

Distribute a report to a (fluctuating) list of subscribers

When using email as your method of delivery for a Data Driven Subscription, best practice is to use metadata to drive that process. Usually a simple table that contains the email address of the recipient and the report name does the trick. This part of the process is no different if you don’t have Data Driven subscriptions. I usually create a table similar to this:

CREATE TABLE dbo.SSRSEmailSubscribers
(
     EmailAddress varchar(128) NOT NULL
    ,ReportName   varchar(128) NOT NULL
);
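
Populating it is just ordinary INSERTs (the addresses below are made up), and maintaining the distribution list means adding or deleting rows – you never have to touch the subscription itself:

INSERT INTO dbo.SSRSEmailSubscribers (EmailAddress, ReportName)
VALUES ('jane.doe@mycompany.com',   'Daily Sales Report')
      ,('john.smith@mycompany.com', 'Daily Sales Report');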

Let’s say I have a sales report that needs to go out on a daily basis and the standard format for this report is Excel. Because we don’t have data driven subscriptions, we can’t just query the table and use the resulting list to email the report. Instead we need to create a File Share subscription that generates this report and saves it to a file share. From there we can “pick up” the newly generated Excel file and email it to the recipients.

  1. Create a subscription to your Daily Sales Report in Report Manager, schedule it for a one time execution at a time just a few minutes in the future, and remember the execution time. (This creates the SQL Agent job in SQL Server.)
  2. Take a look at your SQL Agent jobs in SQL Server. If you have never seen or noticed a job created by SSRS, then you will be wondering where your job is, because SSRS does not use human-readable names for its jobs; it uses those pesky GUIDs as names, ugh! If your server has lots of jobs you may need to sort them by Category to get all the “Report Server” jobs together. Find the job that executed at the time you scheduled (this is why you needed to remember the time!); this is the job you will need to reference in the next step. (There’s also a query after these steps that maps those GUID job names back to their reports.)
  3. Create a new SQL Agent job and add a Transact-SQL script step. In this new step you need to execute the SQL Agent job that SSRS created back in step 1:

exec msdb.dbo.sp_start_job N'B514C05F-07D5-4C0B-9600-666E9980C7C3'

    where B514C05F-07D5-4C0B-9600-666E9980C7C3 is the GUID from the job that SSRS created.
  4. Next you will need to add a new job step for PowerShell. In this newly created step, write your PowerShell script to get the file that was generated (as a result of executing the previous step) and retrieve your list of email addresses. Once you have this information you can send the email with the report attached. There are several ways you can do this, but I chose to use PowerShell. Since PowerShell wants a double quoted, semicolon (;) delimited list of email addresses when sending via SMTP, I wrote my SQL query to return a double quoted, semicolon (;) delimited list of the email addresses. You could have just as easily used PowerShell cmdlets to format your list. Here’s my PowerShell script:

cd c:    # a SQL Agent PowerShell step starts in the SQLSERVER: provider, so switch back to the file system

$FilePath = "c:\temp\"
$smtpServer = "10.0.0.4"
$smtpFrom = "noreply@email.com"

# Build the query that returns the double quoted, semicolon delimited list of addresses
$AddressQuery = "DECLARE @List varchar(MAX);"
$AddressQuery = $AddressQuery + "SELECT @List = COALESCE(@List + '"";""', '') + EmailAddress "
$AddressQuery = $AddressQuery + "FROM dbo.SSRSEmailSubscribers "
$AddressQuery = $AddressQuery + "WHERE ReportName = 'Daily Sales Report'; "
$AddressQuery = $AddressQuery + "SELECT '""' + @List + '""' AS EmailList;"

# Invoke-Sqlcmd returns a DataRow, so capture the aliased column as the recipient list
$smtpTo = (Invoke-Sqlcmd -Query $AddressQuery -ServerInstance "MyServer" -Database "MyDatabase").EmailList

$messageSubject = "Daily Sales Report was executed"

# Grab the most recently generated Excel file from the file share
$latest = Get-ChildItem -Path $FilePath -Filter "*.xlsx" | Sort-Object CreationTime -Descending | Select-Object -First 1
$FullFileName = $latest.FullName

$body = "Attached is the Daily Sales Report"
Send-MailMessage -From $smtpFrom -To $smtpTo -Subject $messageSubject -Body $body -SmtpServer $smtpServer -Attachments $FullFileName
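
As promised back in step 2, here is one way to map those GUID job names back to the reports they belong to. It queries the report server catalog tables directly and assumes the default ReportServer database name; those tables are undocumented, so treat this as a read-only convenience rather than a supported API:

SELECT  c.[Name]        AS ReportName,
        rs.ScheduleID   AS AgentJobName,   -- matches the GUID-named job in msdb
        s.[Description] AS SubscriptionDescription
FROM ReportServer.dbo.ReportSchedule AS rs
JOIN ReportServer.dbo.[Catalog]      AS c ON c.ItemID = rs.ReportID
JOIN ReportServer.dbo.Subscriptions  AS s ON s.SubscriptionID = rs.SubscriptionID;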

Now schedule this newly created SQL Agent job for the time you need your Daily Sales Report to be executed and voilà, you now have a data driven subscription that distributes a report to a fluctuating list of email addresses.

I Didn’t Hyperventilate!

I gave what is officially my second presentation this week. I presented at the Triad SQL BI (Twitter | PASS) user group meeting and I didn’t hyperventilate! That’s a huge deal for someone like me, who is petrified of public speaking.

It started out a little rough though.

Timeline
Friday, 8/22 (just 4 days before presentation date) – I met Katherine Fraser (SQLSassy) for lunch and she mentioned that their scheduled speaker had just cancelled on them the day before. I asked her what she was going to do and she said unless I wanted to present for her, she had no idea. I jokingly said, “Yeah, sure, I’ll present”. Do not EVER tell a chapter lead you will present, even if you’re joking around, because they will pounce on you! Lesson learned there. I agreed to present a very introductory session on SSIS. I then went home and started to panic.

Saturday, 8/23 – I woke up with a horrible sinus headache and thought I was at the beginning of a nasty sinus infection. Now I really started to panic. I sent Martin to the drugstore to buy every sinus medication they had on the shelf. There was no way I could be sick; I could not cancel on Katherine after I had just agreed to present the day before. I proceeded to pound down some Emergen-C and drink about a gallon of water an hour for the rest of the day.

Sunday, 8/24 – I woke up at 4:30 a.m. to take part in the upgrade of a major system at work. I felt about the same as Saturday. I pounded some more Emergen-C and worked until 11:30 a.m. After we got the green light from the testers at 3:30 p.m., I went to bed and collapsed.

Monday, 8/25 – Woke up feeling much better, but not great. Pounded more Emergen-C. Started to work on my presentation. Did I mention that I didn’t have anything prepped for a presentation? I’m not a speaker, why on earth would I have a presentation ready to go? Got a call from my boss that the system upgrade wasn’t going so smoothly and had to start firefighting in production.

Tuesday, 8/26 – Presentation day. Got the word from my boss that the system upgrade was still up in the air, but none of the pieces that were broken were anything I could help with or fix. I started to work on my presentation. Just before lunchtime I was told I had two conference calls I needed to participate in. Great, another two hours I don’t get to work on my presentation! I was finally done with the conference calls when I got a call from my boss: we were rolling back the upgrade and I needed to bring the old server back online. Luckily I had been able to create the content of the presentation and test it. I just didn’t have any time to do a practice run-through. That was going to have to be enough; it was time to go to Winston-Salem.

I arrived in plenty of time, but I forgot: the power supply for my laptop, my external mouse, speaker evaluation forms and my list of things I needed to take with me to the meeting. Luckily my laptop was fully charged and didn’t die during the presentation (in fact I could have gone on for another 2 ½ hours, thankfully no one wanted to stay that long!). A mouse was provided by our wonderful host, Matt Clepper of Inmar, but not before I had a slight mishap using that @!#$%^& embedded mouse on my laptop. Katherine was well prepared and brought speaker evaluation forms. As for my list of things I needed to bring with me, well, I just had to “adapt and overcome”.

The presentation went pretty well, I didn’t hyperventilate. Sometimes you have to have a very simple goal, just survive without a trip to the ER.

Wrap up
Overall it was a good experience. I think I did a good job of presenting and the feedback I got reinforced that. There were some great ideas on what I could have done better and some great comments on what I did well.

Will I speak again? Probably. I’m not sure I’m ready for a SQL Saturday quite yet, but maybe another couple of times at UG meetings and I’ll think about it. A huge “Thank you” goes out to Katherine for taking a chance and believing in me.

Of course I didn’t sleep at all Tuesday night. I kept thinking, “I forgot to tell them…”

Summit 2012 is over, but not forgotten

What a whirlwind last week was.  There were so many sessions to attend and only one of me.  I needed to clone myself a couple of times over in order to catch all the sessions that I wanted to see.  Unfortunately, that technology is nowhere near ready, so I had to choose wisely.

I attended Jason Strate’s (@StrateSQL) session on Extended Events, which was awesome.  I have used extended events in the past, but not to the extent he covered.  Now that I’m wearing the DBA hat again after so many years of not wearing it, this session will be a time saver for sure.

I attempted to attend Bob Ward’s (@bobwardms) session on SQLOS, but was turned away 10 minutes before the session started because it was already full.  Lesson learned here:  Stake out my spot in Bob Ward’s sessions even earlier next year.  He is a smart dude and a great speaker.  Looking forward to the recordings being released so I can see it.

I attended a couple of sessions with Matt Masson (@mattmasson), including the BI Power Hour, which are always fun and educational – Thank you Matt (& Mr Wiggles) for making learning so much fun!

I was also able to attend a session by one of my SQL heroes, Kalen Delaney (@sqlqueen), on the internals of columnstore indexes.  Kalen Delaney is so smart it makes my brain hurt.  I started out as a fan of hers when I purchased my first SQL internals book about a million years ago.

After all these years, I finally got to attend a session with Kevin Kline (@KEKLINE) and David Klee (@kleegeek) on Virtualization.  Wow, the depth of knowledge between these two surpasses the Mariana Trench.

Wrapped up my Summit experience with a couple of sessions on Visual Studio 2012, a.k.a. SQL Server Data Tools.  I was looking forward to seeing Gert Drapers of MS, but his cohorts, Barclay Hill & Adam Mahood, did just fine without him.  Also picked up a few new things from Robert Cain (@arcanecode) on the GUI.

Thank you to all the speakers for taking the time to share your knowledge with the SQL world.

Thank you to all the volunteers and PASS HQ staff.  It would not have been as successful without you.

Replication and TFS

 So, we’ve had this issue lately at work with our TFS build/deploys causing a significant amount of contention and only partially succeeding. 

A little about our environment: 

  • SQL Server 2008 R2 Enterprise Edition, 64-bit.
  • Visual Studio 2010
  • Team Foundation Server (TFS)
  • Using Transactional Replication
  • Using partial projects for replicated objects
  • 31 database projects in our solution

When we looked at the build log, we saw that several of our largest tables were being rebuilt, but not on a consistent basis.  At a cursory glance we could not see why; they appeared to be the same as what was in TFS.  After a couple of weeks we discovered that the tables were only being rebuilt right after a replication snapshot had been re-initialized.  The tables being rebuilt were tables with identity columns that were being replicated.  It turns out we had some mismatched configurations between TFS and replication.  When using partial projects in TFS to manage replicated objects, you cannot use the NOT FOR REPLICATION attribute on your identity columns.  You must also “manually” manage the identity ranges in your replication publications.

 Tips/Reminders when using TFS and transactional replication.

  1. Create a partial project in the publication database project containing all the tables and their respective attributes that you want to replicate.
  2. Add a reference to the subscriber database project by importing the partial project created in step one.
  3. Do NOT use the NOT FOR REPLICATION attribute on the identity column in the tables that are being replicated.  This is counterintuitive, but necessary if you don’t want your tables being rebuilt every time between a snapshot initialization and your TFS build.
  4. When you create the replication publications, you will need to set the identity column management property to manual for the articles that have identity columns (see the sketch after this list).  Again, counterintuitive, but necessary.  Another thing that we had to do was remove permissions from our users to directly update the tables in the subscriber databases.  Our database supports a web-based application that uses a generic service account to access the database.  We had an issue when new/inexperienced support staff would directly modify data in the subscriber database.
  5. In your .sqldeployment file(s), check the option named IgnoreNotForReplication.
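
For reference, here is a sketch of what point 4 looks like if you script the article instead of clicking through the wizard. The publication and table names are made up; the piece that matters is @identityrangemanagementoption = 'manual':

-- Hypothetical publication/article names; the key setting is the manual identity range management
EXEC sp_addarticle
     @publication                   = N'MyPublication',
     @article                       = N'Orders',
     @source_owner                  = N'dbo',
     @source_object                 = N'Orders',
     @identityrangemanagementoption = N'manual';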

The reason for all this hoop jumping is that the tools do not support database replication, nor do they support Change Data Capture (CDC).  These are great features, but when they are not supported in the tools that are available, it makes it very hard to want to use them.