Capturing Asset Change Data – Catch Point Strategies

By Brett Husselbaugh

The capture of asset change data is an often overlooked and under-designed aspect of the typical ITAM solution. Accurate data capture requires tight integration led by process and strongly supported by automation. Even though there are many ITAM tools available, clients are largely left to figure this part of the solution out on their own, since the process is usually specific to them.

IT Asset Management Repository – Accuracy is a Struggle

Anyone who has attempted to build an IT Asset Management (ITAM) data repository, whether with a home-grown database or a commercial off-the-shelf tool, has most likely experienced data accuracy issues. While some tools may assist in maintaining accuracy better than others, the fact remains that ITAM is largely about processes, and if the process breaks down or is ill-defined, the result is degrading data accuracy regardless of the tool being used. We have seen implementations degrade as much as 25% in one quarter.

Catch Points and Data Capture

A catch point is an identifiable step in the life of an asset where changed data can be captured and recorded. Catch points generally already exist when implementing ITAM but they usually require some type of effort in order to capture the asset data that changes due to that “step” or activity. Some examples of catch points are:

  • Receiving
  • Install/Move/Add/Change (IMAC)
  • Break/fix
  • Decommissioning
  • Disposal

Since assets tend to go through these steps in their natural life cycle, some processes usually exist to perform the given function. When developing an ITAM program, the attempt is made to add onto the existing processes, leveraging technicians and/or others to perform ITAM’s data collection as they perform their primary job(s).

How the data capture process is designed and subsequently automated will significantly impact data accuracy.

Classes of Data

There are three classes of data elements that must be considered when designing data capture processes and automation:

Static
Information about the asset that never changes over its life. Examples: manufacturer, serial number, product description, PO, purchase date, purchase cost, asset number. Design the process to capture static elements once and only once, then protect against downstream attempts to change them.

Dynamic
Information about the asset that may change several times over its life. This information usually ties the asset to the business and is often most valued by the business. Examples: cost center, location, assignee, status. Attempt to capture dynamic elements at as many catch points as possible, and design redundancy into the capture to increase accuracy.

Component
Information about the internal components, both hardware and software, of intelligent assets. This class of data is addressed almost exclusively by auto-discovery tools. Examples: processor(s), core(s), speed, LAN adapter(s), IP address, device name, installed software. Leave capture of component elements to the selected auto-discovery tool(s).

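The class distinctions above lend themselves to a simple guard in software. Below is a minimal sketch (Python, with hypothetical field names not tied to any particular ITAM tool) in which static elements are write-once and dynamic elements remain updatable; component data is deliberately excluded, since it belongs to the auto-discovery tool:

```python
class AssetRecord:
    """Toy model: static fields are write-once, dynamic fields are updatable."""
    STATIC = {"serial_number", "manufacturer", "po_number", "purchase_date"}
    DYNAMIC = {"cost_center", "location", "assignee", "status"}
    # Component data (processors, installed software, ...) is intentionally
    # absent: it is owned by the auto-discovery tool, not by manual capture.

    def __init__(self):
        self._fields = {}

    def set(self, name, value):
        if name in self.STATIC and name in self._fields:
            # Protect static data from downstream attempts to change it.
            raise ValueError(f"static field '{name}' already captured")
        if name not in self.STATIC | self.DYNAMIC:
            raise KeyError(f"unknown field '{name}'")
        self._fields[name] = value

    def get(self, name):
        return self._fields.get(name)
```

At the receiving catch point the serial number is set once; any downstream screen that attempts to overwrite it is rejected rather than silently wiping out good data.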
Data Capture Design Principles to Follow

When designing automated means for capturing asset data changes that will be manually entered, adhere to the following design principles:

  1. Select the correct capture media for the job
  2. Avoid multiple levels of screens and attempt to get everything on one screen
  3. More distinct single-purpose screens are better than one multi-purpose screen
  4. Only attempt to capture what the person performing the life cycle step (catch point) will reasonably know at that step
  5. Avoid requiring a password to access the screen, if possible
  6. Design capture to be as close to the actual activity as possible, both in time and place
  7. Use data validation rules in a way that reflects the real world, but do validate.
  8. Avoid look-ups
  9. Do not present known data and ask for validation
  10. Avoid paper and/or email

1: Select the correct capture media for the job

There are two primary automation choices for collecting process-driven ITAM data: a PC/terminal or a hand-held scanner. It is important to choose the proper platform for the given catch point. Hand-held scanners are not just miniature PCs, and yet too often the designers of capture systems that involve hand-held scanners treat them as such.

They design screens that require extensive typing of information, selecting from drop-downs, and/or looking up existing data. The general rule of thumb to follow is: if the user will be required to perform a moderate to heavy amount of typing or keystroke entry, use a full-sized keyboard and screen (PC or terminal). Forcing that workload onto a scanner invites non-compliance, due to the cumbersome and slow user experience, along with higher error and omission rates.

Proper design of hand-held scanner-based data collection methodologies is an art form in and of itself, easily deserving of a separate article dedicated to the topic. Design considerations that require careful thought include preceder codes, bar-code size and coding, flow of work, size of target tags, and tolerance for reading different bar-coded data elements in any order. Seldom does a PC-based data entry screen ported to a hand-held scanner work well.

Hand-held scanners work best at the receiving catch point, the decommissioning/disposal catch point, and for routine floor-walk mini inventories.

For many catch points, however, a PC/terminal-based data entry screen is the proper choice.

2: Avoid multiple levels of screens and attempt to get everything on one screen

Nothing creates accuracy and omission issues, as well as non-compliance, faster than requiring a technician or other user to work through a cumbersome screen with several levels of up/down navigation. These individuals are usually capturing asset change data as an ancillary task to their primary job. The screen must be intuitive and instantly obvious in how it is used, such that a casual user can immediately understand what to do and how to do it. It must also be fast to operate if the intended audience is expected to use it.

3: More distinct single-purpose screens are better than one multi-purpose screen

This goes hand in hand with points 2 and 4. Keep each screen singular in purpose; that goes a long way toward placing everything on one screen without the need to navigate between tabs and/or levels. You will end up with several screens, each designed for a specific purpose such as receiving, IMAC, or disposal.

There are data elements that are appropriate to collect at one catch point (such as collecting the purchase order number at the receiving catch point) that are either not available, or inappropriate, to collect at other catch points. Therefore, it is important that data elements that are not applicable to the catch point at hand not be presented on a capture screen.

For instance, the serial number is typically considered an important data element in any IT asset management solution. It is also a static data element whose value never changes, so its capture should be designed to occur once in the asset's life cycle, with the element then protected from subsequent change. Putting the serial number on a mid-life-cycle catch point screen (such as the IMAC screen), or on a multi-purpose screen, will typically do more harm than good. We have seen one such implementation lose over 10% of the serial numbers being tracked as a result of this practice.

4: Only attempt to capture what the person performing the life cycle step (catch point) will reasonably know at that step

This is one of the most common causes of accuracy issues: asking someone for information they do not possess, or asking them to re-collect static data that has already been captured in the hope of validating the existing values. Before screens are designed, follow along with a technician or other person who routinely performs the given life cycle step (catch point), and validate which ITAM data elements might change as a result of their activity and which elements will reasonably be known at that point in time. As a common example, a technician is often asked to provide the purchase order number at time of installation. However, that information is seldom available to the technician, and the data field will largely be left blank, or filled in with something else.

Another common mistake is to ask technicians performing IMACs to pick up the serial number of the asset (in addition to its asset number) in order to get extra validation that the serial number on file is correct. What often happens is that the technician's entry has errors (too many digits, too few digits, a "Z" entered as a "2", an "O" as a "0") and ends up wiping out good data. The rule with static data is to capture it once from a highly trusted source, such as an Advanced Shipping Notice, and then protect it from subsequent change. Technicians will appreciate having fewer data elements to capture, and the accuracy of the data will be maintained at a higher level.

5: Avoid requiring a password to access the screen, if possible

This is done to make the process of capturing ITAM data as easy as possible. If the technician has to remember yet another password, it acts as a deterrent to capturing the data, and the result is lower compliance. There are also times when catch point screens are extended to the general user population, such as for self-service inventories. Unless the screens are tied into Active Directory (or another enterprise authentication system), that would mean a tremendous number of user IDs and passwords to manage. Some sort of self-authentication scheme should be used, if possible.

6: Design capture to be as close to the actual activity as possible, both in time and place

In many companies, the process for capturing asset change data involves asking the technician to record changed information on an electronic or paper form, which is then emailed to ITAM and entered into the repository by ITAM data entry clerks. The same process might serve end users reporting ITAM changes (such as location, assignee, and/or cost center changes). The concept is that data entry is done by ITAM staff, with information fed to them in free-form, or quasi-free-form, fashion by the various people causing the changes.

This practice, however, leads to omissions and inaccuracy in the data. It is very important to place the capture of the data, including the data validation rules, as close in time and place to the change activity as possible. Capturing data in a free-form manner and forwarding it via email to be entered later violates both the time and the place proximity considerations. Even if the data is captured in an automated screen with strong validation rules enforced, presenting it for re-typing will introduce errors.

Another issue with performing final data entry too far from the place of change arises when a particular data element needs to be confirmed. Sometimes the wrong data element is captured, or what was captured does not appear in the data entry clerk's drop-down menu, requiring a re-visit to the physical asset to confirm what was captured. If the capture and final entry of the data happen in close proximity to the physical asset, this re-visit is easily performed. If they are too far apart, in place and/or in time, the re-visit becomes a project and is seldom done, resulting in omissions and/or errors.

The best practice is to extend that capture of change data to the point of change, with strong validation enforced at the point of capture, and with the record being written directly to a staging area to be consolidated into the main repository (through yet another set of business rules). This eliminates the errors encountered due to double entry and it also eliminates omissions and/or inaccurate entry.
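The staging pattern described above can be sketched as follows (Python; the field names, status list, and in-memory "staging table" are illustrative assumptions, not any specific product's API):

```python
VALID_STATUSES = {"installed", "stock", "repair", "disposed"}

staging = []  # stands in for a staging table or area


def capture_change(asset_number, field, value):
    """Validate at the point of capture; reject before anything is staged."""
    if not asset_number:
        raise ValueError("asset number is required")
    if field == "status" and value not in VALID_STATUSES:
        raise ValueError(f"unknown status '{value}'")
    staging.append({"asset": asset_number, "field": field, "value": value})


def consolidate(repository):
    """Second set of business rules: apply staged changes to the repository."""
    while staging:
        rec = staging.pop(0)
        repository.setdefault(rec["asset"], {})[rec["field"]] = rec["value"]
```

Validation happens while the technician is still standing at the asset, so a bad record is rejected immediately; consolidation into the main repository applies its own rules later.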

7: Use validation rules in a way that reflects the real world, but do validate

It is important to use strong validation on captured data, especially if non-ITAM experts (technicians, end users) are performing the capture. However, it is equally important to carefully design the validation rules and how they are applied. For example, forcing a selection from a drop-down will certainly go a long way toward keeping spelling standardized, but it can also introduce inaccuracy. How? When validation forces selection from a drop-down and the value being entered does not appear in the list, the person performing the collection will choose the closest match. This introduces an error that is hard to detect: the spelling looks correct and conforms to the validation rules, yet it is wrong.

Drop-downs are a good fit for items such as asset status, cost center (if you are certain you are keeping that list up to date), site, and name. They are a poor fit for product information such as manufacturer, product description, or model. Despite the best diligence in building such drop-downs from a master product catalog, we have seen products absent from the list turn up within minutes of starting wall-to-wall inventories for clients. There are always more variations of assets than expected, so do not attempt to hard-constrain product input. As an alternative, allow the user to create a new entry that is instantly visible to all other users. This preserves the benefits of constrained input while allowing the drop-downs to expand to accommodate unknown equipment.
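The extensible drop-down alternative might be sketched like this (Python; a hypothetical shared vocabulary, not any specific tool's feature):

```python
class SharedDropDown:
    """A constrained value list that users can extend.

    Additions are made to the shared set, so a value entered once for a
    new piece of equipment is instantly valid for every other user.
    """

    def __init__(self, initial):
        self._values = set(initial)

    def validate(self, value):
        return value in self._values

    def add(self, value):
        # Equipment missing from the master catalog is added once,
        # rather than forcing the user to pick a wrong "closest match".
        self._values.add(value)
```

Input stays standardized for known values, while a genuinely new manufacturer or model can be recorded accurately instead of being shoehorned into the nearest existing entry.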

Another consideration in designing validation is to make it agree with the real world. We are aware of one ITAM solution where the client was tracking location down to the individual cubicle using a hierarchy of Site-Building-Floor-Room-Cube. The back-end automation completely enforced the relationships within the hierarchy. For the data entry validation to work, every site had to be populated with every building at that site, every building with every floor, every floor with every room, and every room with every cubicle in it, and it all had to be kept up to date. To the tool's designer, this enforcement made sense, as it protected the data from integrity and accuracy issues. However, it was wholly unusable in the real world, since very few customers have the staff and/or infrastructure to maintain such a database of location information down to the individual cubicle.

In this example, the data collection was performed using a general-purpose screen (violating design principle 3), and the location information was allowed to be entered into non-validated fields (violating design principle 6). The validation was enforced later, when the record was consolidated from the staging table into the repository. Almost every record failed location hierarchy validation and was thrown out as an exception. This meant that change data could not get into the repository, and it significantly overloaded the already over-worked staff as they worked to clear the exceptions. At any given time, there were over 1,000 exception records that needed to be researched and cleared. Exceptions were created even when the data was correct: a technician may have entered "1" for floor, while the drop-downs in the system listed "01" as the available floor for that building. As far as the system was concerned, it was not a match and was thrown out as an exception.
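A mismatch like the "1" versus "01" floor entry above can often be avoided by normalizing entered values before comparison. A minimal sketch (the two-digit floor convention is an assumption for illustration):

```python
def normalize_floor(value):
    """Canonicalize floor codes so '1', '01', and ' 1 ' all compare equal."""
    v = value.strip()
    # Pad purely numeric floors to two digits; upper-case codes like 'b1'.
    return v.zfill(2) if v.isdigit() else v.upper()


def floors_match(entered, on_file):
    return normalize_floor(entered) == normalize_floor(on_file)
```

Applying the same normalization at the point of entry and during consolidation prevents correct data from being thrown out as an exception over formatting differences.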

Our advice in tracking location is to track only to the point of value. More often than not, when designing an ITAM solution to deliver tangible value, you discover there is no value in attempting to track location down to a specific cubicle. There is a point where the granularity of tracking yields negative returns on the effort, and tracking location beyond the site address is frequently past it. This is especially true for mobile assets that change location frequently. Beyond that, validation must be pushed to the point of entry rather than deferred to back-end data consolidation; deferring it simply invites unnecessary data exceptions that must be investigated and cleared by the ITAM staff.

8: Avoid look-ups

Another common approach is to have a technician perform a look-up of the asset(s) being changed, and then enter the changed information once the record has been found. This technique may be simpler from an automation perspective (although there are often license considerations) in that the technicians are simply given role-specific access to the main ITAM tool. This means no intermediate screens need to be built and/or data connectivity schemes designed. Also, this is exactly what the ITAM data entry clerks do to edit records. Finally, it enforces uniform validation on everyone entering information into the system.

The drawback (beyond any licensing drawbacks) is that it takes too long. Imagine being a technician performing a department move resulting in 50 IMACs. Most technicians will not spend the time to look up 50 asset records, navigate to the specific information fields that need to be changed, make the changes, and then save the records. The other drawback is that this approach tends to violate design considerations 2 and 3, presenting the technician with more information than is needed at that catch point, requiring some navigation to perform the look-up, and requiring movement to the field(s) to be changed.

Such an interface is wholly appropriate for staff very knowledgeable in the tool, such as the ITAM staff, but not for individuals who perform the function as an ancillary duty. Poor results include technician non-compliance; data omission, as the technician overlooks applicable fields among the multiple non-applicable fields on the screen; and inaccuracy, from wrong information being added to those non-applicable fields.

Another approach that we often see is to have the technician perform a look up of an asset, visually comparing what is in the database to the physical asset(s), and updating the asset record(s) with any changes. This concept simply invites non-compliance as the technician will often claim that the information was correct and that nothing changed. One company began to compare the number of uses of the collection screen to the number of IMAC tickets and followed up with the technicians that did not perform the look-up and validation step. Upon our investigation, we found that the technicians would now perform the look up, but would simply claim that nothing changed (to satisfy the current way their compliance was being measured). The result was solid performance metrics, but no improvement in repository accuracy.

The best practice is to provide the technician with a simple screen that requires minimal collection of applicable data elements. It can be completed quickly, without waiting for a look-up, and because no existing data is presented for comparison, it removes any legitimate excuse to skip the step.

9: Do not present known data and ask for validation

This builds on design principle 8 by avoiding having the technician perform a mini-inventory on each call. Sometimes the technician is given a printed or electronic form listing all asset data assigned to the person the technician is about to visit. The technician is expected to compare the data in the print-out to what is currently in the person's physical possession. The primary difference from principle 8 is that the technician is not asked to perform an individual asset look-up, but the purpose of validating is the same. Our experience with this scheme is that the technician will still skip the step and report back that nothing changed, because it takes too long to perform.

Just like design principle 8, presenting a technician or end user with current data and asking for a validation invites non-compliance. It is better to simplify and minimize the data elements that need to be collected out of respect for the collector’s time and ask for those elements to be sourced directly.

10: Avoid paper and/or email

This is probably the most obvious of the design principles, and yet we continue to see paper and/or email as accepted channels for asset change data input. Such capture tends to lack embedded validation (mostly free form) and requires entry by someone other than the person who captured the data and at a time and place that is often far from where the change occurred. Such systems are used as they are simple to implement, but the price paid is increasingly poor data quality as more and more data is sourced through these channels.

Based on more than 20 years in this space, this article presented what works and what does not. It introduced the concept of data elements falling into three classes, each with specific implications for capture, and presented ten important principles to consider when designing asset change data capture screens.

While this article touched briefly on the use of hand-held scanners in data capture, the recommended methodology for designing capture using scanners is significantly different from computer-hosted screens and was not covered in this article.

About the Author

Brett Husselbaugh

Brett Husselbaugh is the President of ETelligent Solutions, Inc.