So what do you do when you have been tasked with finding all of the needles in a BIG corporate haystack? I say, "Grab a strong magnet and get to work on counting." I was hired as the first asset manager at a major art institute a few years back. One of the first tasks I had to accomplish was creating a master CMDB of every technical asset owned by the organization. The first step was to determine what we wanted to track. I convened a meeting with the network guys, a representative from telecom, and my direct manager. Did we want to track mice, external drives, keyboards, and sound-related items (speakers, microphones, etc.)? That was easy; we said "no" to all of these devices. Did we want to monitor monitors? We went back and forth and voted "no" to this one too. You might wonder why we made this particular decision. We came to a consensus that almost every deployed unit was a CRT worth less than $150. They were not only below our financial threshold, but usually not worth repairing in case of failure. (Note that this was back in 2004, when flat panels were still the exception rather than the norm; it was only in 2005 that we decided to switch to that format.) That didn't leave much for us to inventory. We basically had to track CPUs, printers, and select externals such as scanners (laptops, of course, being lumped into the first category). Telecom told us that we didn't need to worry about their stuff, but the network guys asked to have all their servers, routers, switches, hubs, etc. included.
Let’s Begin with Discovery
We had some data on what was out there and knew it was incomplete and inaccurate, but it did give us the approximate number of computers we would find. We also knew the number of desktop techs that could be assigned to this project. Luckily, our contract with our support vendor included a clause to cover this inventory project: each year we could run an inventory as part of their efforts. As an aside, I highly recommend adding this clause to your master contract (assuming you don't already have one), since the payback is significant. Just make sure there are no exorbitant fees for the effort. We knew we would be asking the team to gather information on the various units found out there. The real question was what data was required for logging. If a machine was already on the network and running correctly, the software agent would already report all the internal information from the CPU unit. We would know memory size and type, disk information, installed software, and patch levels. We saw no need to boot up these machines, and this was a HUGE time savings. We would only need to do this data gathering on the few lab or non-networked units, and that would mean creating survey floppies and memory sticks (we wanted to be sure we could cover every possible machine layout).
Presuming that the data collected on the connected machines was accurate, we produced a master list of all CPUs. This ONLY included Windows-based units. DOS, Mac, and all the flavors of UNIX would need to be manually added due to the limitations of our software inventory package. As stated above, we would also need to check standalone units. The assigned tech would need to boot up these exceptions and manually gather their data using the aforementioned storage media. Work would be done over the weekends since we did not want to affect staff. We figured 5 minutes per machine, even though we anticipated most units would have their data gathered much faster (better to get our work done sooner than to go into overtime). We hoped not too many machines would have access issues due to their desk layout. To lessen that condition, we asked each affected area to please prep for the survey by clearing the computer of toys, plants, and papers. Each weekend had a four-person staff allocated. Since we had an approximate count of total computers deployed, I could break out the counts of devices by building and floor, creating sub-totals for each area. Doing the math, I was able to determine how much work could be done as well as how to split it up. We figured the team could do between 150 and 200 computers per day (about 300-400 per weekend). I did not want them hopping from building to building, so that was factored in, as was visiting sites outside the home campus (we had four external sites that needed to be surveyed). I put together the master schedule broken out by calendar dates (of course taking into account holidays, since we were doing this towards the end of the year).
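The scheduling math above is simple enough to sketch. Here is a minimal illustration of the approach; the per-building counts and building names are invented for the example, not our actual numbers:

```python
import math

# Hypothetical computer counts per building (our real numbers came from
# the agent-generated master list, broken out by building and floor).
counts = {"Main": 2400, "Annex": 900, "Site A": 300, "Site B": 450}

# Conservative end of our 300-400 machines-per-weekend estimate,
# i.e. a four-person team at 150-200 machines/day over two days.
PER_WEEKEND = 300

# No building-hopping: each building gets whole weekends to itself.
schedule = {bldg: math.ceil(n / PER_WEEKEND) for bldg, n in counts.items()}
total_weekends = sum(schedule.values())

print(schedule)        # weekends needed per building
print(total_weekends)  # overall calendar length before holidays
```

From a total like this you can lay the weekends onto calendar dates, skipping holidays, exactly as described above.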
It was time to get the teams out searching for lost assets. The team leader was supplied with a master list along with the subset of the batch they would need to find for that allocated time and region. Additionally, we provided extra stick-on asset labels. We figured there was an excellent chance that the teams would find items missing tags, or assets that had tags but somehow weren't in the inventory. Of course, we found assets in both categories (I recommend researching these, since you may discover that items were purchased outside the normal system, and that has to be followed up on and prevented in the future). To speed things up, and to aid with difficult accessibility issues, we wanted to use the bar codes on each label. Each team had a scanner unit assigned to it and was trained in its care and feeding to make the data useful. Of course, we had the memory stick and floppy packets ready. We even got a few flexible mirrors with extension wands to reach behind devices (yes, they did see use...).
As part of the negotiations with our vendor, we had senior management sign off on this project, but the information now had to pass down to departmental-level managers. They needed to know dates, effort, intent, risk and fallback, and have access to a person in case they had questions. We also had to let them know that we would be visiting EVERY machine they were in charge of. Since we had senior management backing, they signed off on the project (amazing how that works, LOL). We also got approval on our notification message: we had every user turn off their machine and additionally gather any sign-on passwords (in case something necessitated our powering on their unit). This could include BIOS passwords, network logins, screensavers, and other security protection. They were told that if the information was not available, their machine would still be accessed as needed (using system administration tools). We debated back and forth about anything else we wanted to do with the machines. Even though we would be 'touching' every machine, this was not the time to do anything in terms of maintenance or upgrades of any sort. We decided the inevitable scope creep was not worth the risk, not to mention that this could eventually be managed remotely (as previously mentioned). Lab equipment and stand-alone machines were the only ones we MIGHT need to touch, but we ran a risk/reward analysis and decided to leave those alone as well. Almost every lab machine had highly sensitive data and applications on it, and any changes could greatly impact their functionality. We just noted them and informed their management of the risks and how we could go forward in the future to help with risk mitigation and security. The reader might decide otherwise, but remember that this can become a serious time sink.
Time passed, and equipment was scanned or manually read when necessary (hurrah for the mirror sticks...). Items without a tag had a new bar-code label added, and this data was noted on blank asset report sheets. If items were already in the master list, they were checked off. Secondary devices were noted as child assets of a parent computer. Printers had their queue names noted (when they were network connected) and also had their physical locations recorded. We didn't just note this on the spreadsheet; we also placed the information on blueprint maps we got from our facilities group. This last set of information became critical to the follow-up printer project that I described in a separate article.
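The parent/child relationships we logged can be modeled as simple records. The field names and asset tags below are illustrative only, not our actual CMDB schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Asset:
    tag: str                          # bar-code asset label
    kind: str                         # "computer", "printer", "scanner", ...
    location: str                     # building/floor, matching the blueprint maps
    parent: Optional[str] = None      # tag of the parent computer, for child assets
    queue_name: Optional[str] = None  # network printers only

# A computer with a child scanner, plus a networked printer nearby.
pc = Asset("A1001", "computer", "Main/3")
scanner = Asset("A1002", "scanner", "Main/3", parent=pc.tag)
printer = Asset("A2001", "printer", "Main/3", queue_name="PRT-M3-01")
```

Keeping the parent as a tag reference (rather than nesting objects) mirrors how a flat spreadsheet or CMDB table records the association.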
Data was taken off the collected sheets and logged into our CMDB. This ensured that our database had the most up-to-date data available. We concentrated on discrepancies and on verifying that the associations between child and parent devices were established. After a couple of months, we had finally completed the inventory process. We had about 6,100 CPU units and discovered about 80 devices that we still could not find. It was time to put on our detective caps and figure out where these things might have gone. Now, if you were a missing piece of IT inventory, where would you possibly hide? Our first guess was that we had disposed of the items and not tracked this fact. Sure enough, after we brought out ALL of the reports from our data disposal firm (both what we showed as removed AND what they reported back to us as removed), we found that many missing items had been disposed of. This brought the numbers into the 40-50 range. Next, we looked at the information we already had on the devices. In many cases the units indicated ownership or location. We contacted the allocated users, and many of them remembered those parts and what happened to them. They told us they had disposed of them on their own (unauthorized, but what can you do other than smack their wrists and tell them never to do it again?). They also would walk us over to their storage closets and show us items they had packed away "as emergency back-ups." We informed them that we had spares and would cover those concerns. We also told them that a deployed asset counted against the contract. We either gathered these up as obsolete or marked them as not in use so our contract numbers would not be negatively impacted. This turned out to be the biggest 'win' of this portion of the whole process. This step got us down to a little over a dozen missing units. We even visited the technicians' work areas and discovered old machines and other bits and pieces stashed away.
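The reconciliation steps above boil down to successive set subtractions against the master list. A minimal sketch, with made-up asset IDs and counts scaled to match the story:

```python
# Hypothetical asset tags: the full CMDB master list, what the field
# teams actually located, and what the disposal firm's reports covered.
master = {f"A{n:04d}" for n in range(6100)}           # ~6,100 CPU units
located = master - {f"A{n:04d}" for n in range(80)}   # teams missed 80
disposed = {f"A{n:04d}" for n in range(35)}           # documented disposals

missing = master - located   # 80 unaccounted-for devices
missing -= disposed          # cross off anything the disposal reports cover

print(len(missing))          # what is left to chase down user by user
```

Each later step (user interviews, closet finds, email searches) is just another subtraction from `missing` until only the write-offs remain.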
We marked all of these units as 'stored' rather than 'deployed'. We were now into single digits! The last step I took actually caused a few raised eyebrows in the network sector! I did a Google Desktop search for these last few missing items, searching the email archives as well as the network storage array. I STRONGLY suggest thinking about this one before you try it yourself. There are not only privacy and security issues involved, but you also have a chance of creating a LOT of network traffic. Sure enough, I found email messages showing that a couple of units had gone with employees no longer with the company and that one item (a printer) had been lent by the marketing team to a publicist in New York.
Our project was now complete. Of the roughly 6,100 original items, we had narrowed the missing items down to about seven devices. Luckily, none of these were units of any value. We could not find one ancient printer and six 15″ monitors, all items not worth researching further. We just wrote those off and we were DONE. We had created a solid starting baseline, ensuring future inventory changes would go against a known, accurate database. The data told us how many computers we had, enabling our support contract to be reduced. It also told us what printers we had, so we could make smart decisions on deployment (as described in the other article mentioned above). It told us the number of notebooks and desktops we had, so we could track risk and deployment. The most amazing thing was the unexpected help it provided the data center guys. Since they did not know what servers they had, their locations, or their loaded software, the value of the data we provided was hard to quantify. We knew it would impact backups, risk assessment, load balancing, contracts, and optimized usage, but placing a dollar value on that knowledge was something we left to their group. The group's manager was extremely thankful for what we were able to provide. I must say it was scary that they didn't already have that information, but this was a win we didn't even expect.
BTW… I won an award for the successful completion of this project.