So with everything else being SQL …
What do you do when faced with a serious compliance problem in your SQL deployment? Easy … make sure everything is equal (okay – enough with the bad jokes).
Two years ago, I saw possible compliance issues with our SQL deployment. I gathered information on both our workstation and server counts, and immediately found that we had very tenuous data from the WINTEL group. They did not want an agent sitting on their server boxes, so I received a one-day snapshot from their team, which I was able to merge into the desktop data. Flash forward to the present day.
Serving Up a True Up
I recently received a new request: our SQL team wanted to true up and bring the company up to a SQL 2012 standard (where possible). As part of the overall effort, I would need to refresh the numbers and see our compliance level. The server guys relented a little and provided me with a 10,000-foot-view report from TADDM (Tivoli Application Dependency Discovery Manager). That data was light (to be professional about it). It had a cryptic machine name (not a UUID or BIOS ID) and only a cursory description of the product. For example, it would not indicate the edition – Standard, Enterprise, Developer, or even the free Express edition! It also seemed that if a machine had gone through multiple service packs, it would list line items for SP2, SP3, and SP4 as separate installs. This report was merged with the old server inventory report (the one I had gotten two years ago) and a real-time workstation report created using the HP DDMi discovery tool. Now came the fun part – normalizing, analyzing, and categorizing.
I had the names of the users in each report, so I was able to sort by that field. I also sorted the units in another tab by computer name (even if those names were cryptic). Using those criteria, I started seeing patterns (with the help of Excel's conditional formatting tool). Where DDMi and TADDM aligned, I could assume the IBM product was likely double (or triple) counting an installation. DDMi also told me about instances where SQL products were bundled with another application. Finally, I cross-referenced the data against our MSDN user list. Sure enough, many deployments came from this source.
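The cross-check between the two tools can be sketched in a few lines of Python. This is a hypothetical reconstruction of the logic (the real work happened in Excel, and the machine and product names below are invented), not the actual report:

```python
# Hypothetical sketch: flag (machine, product) pairs reported by both
# TADDM and DDMi. Where the two tools align, the TADDM line items are
# likely double (or triple) counting a single installation.

def flag_double_counts(taddm_rows, ddmi_rows):
    """Return TADDM rows whose machine/product pair also appears in DDMi."""
    ddmi_set = {(m.lower(), p.lower()) for m, p in ddmi_rows}
    return [(m, p) for m, p in taddm_rows
            if (m.lower(), p.lower()) in ddmi_set]

taddm = [("SRV001", "SQL Server 2008"), ("SRV002", "SQL Server 2005")]
ddmi = [("srv001", "sql server 2008")]
print(flag_double_counts(taddm, ddmi))  # [('SRV001', 'SQL Server 2008')]
```

Matching on lower-cased names is the same forgiving comparison a conditional-format rule gives you in Excel.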
Clean Up Begins
It was now time to start the clean-up process, beginning with the report itself. I quickly eliminated the units where a machine install came from MSDN (verified by contacting the user) and where DDMi indicated a bundle scenario. Next, I looked for machines showing multiple service packs as unique installs. I contacted those users and verified that they in fact had only a single instance. Those steps cleaned up the report considerably.
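The service-pack de-duplication is mechanical enough to sketch. A minimal Python version, assuming the report has been reduced to (machine, product) rows with the service pack tacked onto the product string (an assumption on my part about the TADDM export format):

```python
# Collapse per-service-pack line items (SP2, SP3, SP4 reported as
# separate installs) into one install per machine and base product.

import re

def collapse_service_packs(rows):
    """'SQL Server 2005 SP3' and 'SQL Server 2005 SP4' on the same
    machine count as one install of SQL Server 2005."""
    seen = set()
    for machine, product in rows:
        base = re.sub(r"\s+SP\d+$", "", product, flags=re.IGNORECASE)
        seen.add((machine, base))
    return sorted(seen)

rows = [("WS042", "SQL Server 2005 SP2"),
        ("WS042", "SQL Server 2005 SP3"),
        ("WS042", "SQL Server 2005 SP4")]
print(collapse_service_packs(rows))  # [('WS042', 'SQL Server 2005')]
```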
Now it was time to remove any installed software that we could. The first category was unused software. Only DDMi provided a usage report, which made it easy to see machines with zero or very low usage. Each of those users was notified, and almost everyone removed the software to help us out (and thus saved us from paying a tech to do a service call). I next looked at deployments by machine name and then by SQL version. We had many machines running multiple SQL versions. Each of those users was asked whether that was intentional. When possible, we asked them to remove all older versions and got tremendous cooperation. Those were the “low-hanging fruit” of this operation. We had programmers and support staff who reported genuinely needing multiple versions, and our records were updated to reflect that fact.
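Both of those triage passes are simple filters. A sketch of the logic, assuming a DDMi-style usage export reduced to (machine, user, version, days-used) tuples – the field names and the "very low usage" threshold are my own illustrative assumptions:

```python
# Two triage filters: installs with zero/very low usage (removal
# candidates) and machines running more than one SQL version.

from collections import defaultdict

def removal_candidates(rows, max_days_used=1):
    """Return (machine, version) installs with zero or very low usage."""
    return [(m, v) for m, u, v, days in rows if days <= max_days_used]

def multi_version_machines(rows):
    """Return machines running more than one SQL version."""
    versions = defaultdict(set)
    for m, u, v, days in rows:
        versions[m].add(v)
    return {m: sorted(vs) for m, vs in versions.items() if len(vs) > 1}

rows = [("WS010", "alice", "SQL 2005", 0),
        ("WS010", "alice", "SQL 2008", 40),
        ("WS011", "bob", "SQL 2008", 85)]
print(removal_candidates(rows))      # [('WS010', 'SQL 2005')]
print(multi_version_machines(rows))  # {'WS010': ['SQL 2005', 'SQL 2008']}
```

Each hit on either list is a conversation with a user, not an automatic uninstall.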
Our next action was to identify those machines running SQL without an edition type. Here is where it got tricky. We had to have users run scripts to identify many of those unknown deployments – Microsoft does not let you simply open Help > About to get this data! Using scripts and other tricks, our users provided the vital details. I was even able to look at folder names to determine whether something was a paid version or the free Express edition. As part of this project, I also verified that an Express version wasn't in use for production and that an Enterprise edition was only used where that level of the product was actually warranted. Phone calls and emails cleaned up those issues.
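One script worth knowing here is the documented T-SQL query SELECT SERVERPROPERTY('Edition'), which returns strings such as 'Standard Edition' or 'Enterprise Edition (64-bit)'. Once users send those strings back, sorting them into license buckets is trivial; the bucket names below are my own, not Microsoft terminology:

```python
# Map a SERVERPROPERTY('Edition') result string to a rough license
# bucket for the compliance spreadsheet. Bucket names are illustrative.

def license_bucket(edition_string):
    e = edition_string.lower()
    if "express" in e:
        return "free"        # no license cost, but not for production here
    if "developer" in e:
        return "msdn"        # covered by a subscription, not a true-up item
    if "enterprise" in e:
        return "enterprise"  # the expensive one - verify it is warranted
    if "standard" in e:
        return "standard"
    return "unknown"

print(license_bucket("Enterprise Edition (64-bit)"))  # enterprise
print(license_bucket("Express Edition"))              # free
```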
Building the Spreadsheet
All of this data went into a summary tab on the same spreadsheet. The best approach for the spreadsheet tables was to put purchases in one tab, deployment counts in another, and usage figures in a third. Totals were linked to a master table so that when figures changed on the other tabs, the values shifted dynamically. I could see real-time values and the areas to focus on. The use of color – negative numbers in red, positive numbers in black – helped highlight problem areas.
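The master-table math is just licenses owned minus licenses installed, per edition. A sketch with made-up figures (the real numbers lived in the linked Excel tabs):

```python
# Per-edition net position: purchases minus deployments. A negative
# number is the "red" cell - a shortfall to investigate or true up.

purchases = {"standard": 120, "enterprise": 10}   # illustrative figures
deployed = {"standard": 150, "enterprise": 14}

def net_position(purchases, deployed):
    """Licenses owned minus installs for every edition seen in either tab."""
    editions = set(purchases) | set(deployed)
    return {e: purchases.get(e, 0) - deployed.get(e, 0) for e in editions}

for edition, net in sorted(net_position(purchases, deployed).items()):
    color = "RED" if net < 0 else "black"
    print(f"{edition:10} {net:+4d} ({color})")
```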
When we started this project, the original data had almost every number in red, including the worst (and scariest) shortfalls in Enterprise products. Those are expensive, and you do not want to true them up if you don't need to. After the steps described above, many more figures were in black than in red. At this point, I was able to apply the Microsoft policy of covering older-version licenses with a newer-version license to reduce the discrepancies between licenses owned and licenses installed even further.
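That last step – letting a newer-version license cover an older-version install – can be sketched as a cascade: surplus licenses at each version roll down to absorb shortfalls at older versions. The exact policy should be checked against the actual Microsoft licensing terms; the numbers here are illustrative:

```python
# Cascade spare newer-version licenses down to cover older-version
# installs (downgrade-rights style coverage). Illustrative figures only.

def apply_downgrade_rights(owned, installed, versions_newest_first):
    """Return remaining shortfall per version after cascading spares down."""
    shortfall = {}
    spare = 0
    for v in versions_newest_first:
        net = owned.get(v, 0) - installed.get(v, 0) + spare
        shortfall[v] = min(net, 0)   # negative = still short after coverage
        spare = max(net, 0)          # surplus carries on to older versions
    return shortfall

owned = {"2012": 100, "2008": 40, "2005": 5}
installed = {"2012": 80, "2008": 50, "2005": 10}
print(apply_downgrade_rights(owned, installed, ["2012", "2008", "2005"]))
# {'2012': 0, '2008': 0, '2005': 0} - the 2012 surplus covers both gaps
```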
In the end, we only needed to buy a few items and fortunately, none in the enterprise category.
The results were much better than the initial scary discrepancies. Calculating the savings from the original baseline to the project's end result, we had successfully concluded a six-figure clean-up. The project took a couple of months but was well worth the effort.