Measuring Asset Performance – Adding Value through Metrics

By Dan Abell

What is IT Asset Management overlooking? IT Asset Management has traditionally focused on hardware and software inventory management. Software entitlement, license compliance, hardware and software maintenance management, and vendor management are all core competencies practiced by ITAM. However, Asset Management should also be looking at asset performance, not unlike the “Enterprise Asset Management” function for traditional capital equipment.

From Wikipedia:

Enterprise asset management (EAM) means the whole life optimal management of the physical assets of an organization to maximize value. It covers such things as the design, construction, commissioning, operations, maintenance and decommissioning/replacement of plant, equipment and facilities. “Enterprise” refers to the management of the assets across departments, locations, facilities and, in some cases, business units. By managing assets across the facility, organizations can improve utilization and performance, reduce capital costs, reduce asset-related operating costs, extend asset life and subsequently improve ROA (return on assets).

Looking at the last sentence in the definition the key phrases are:

  • Improve utilization and performance
  • Reduce capital costs
  • Reduce operations costs

So can, or should, the IT asset management function become engaged in these areas, and is it even relevant for IT? In most organizations the largest capital investments are made in IT, so should we be seeking ways to better actively manage the largest capital investment portfolio of the enterprise?

Do you have a way to measure ROA? I bet not. What about utilization and performance? Utilization and performance have always been measured by the technical side of the IT organization, if they are measured and tracked at all. In the past few years the move toward virtualization has renewed interest and focus on utilization and performance metrics. Many IT shops have historically tracked these metrics, but the virtualization movement has elevated their visibility, especially for assets with low utilization rates, which are prime candidates for virtualization.

But what about the business side of IT that you know as the IT finance function? Where has the IT finance function been? The answer is that few, if any, finance groups have tied asset utilization to the effective cost of delivering services. There has been little or no attempt to specifically measure asset performance. It does seem appropriate that the IT Asset Management function attempt to measure the performance of the IT asset portfolio rather than simply maintaining the inventory.

Getting Started

Where to start? Well, what data is available? Asset Management has an inventory of all of the acquired hardware and software for IT. So we know what is installed, staged, spared, and in storage; but do we have any idea how effectively it is being utilized? Clearly there are multiple ways to look at utilization, some more appropriate and value-based than others. However, the point of this article is that you should explore ways to measure asset performance in business/financial terms.

How about looking at servers and storage? Server utilization by individual device is typically available and captured in many IT shops. While an individual server might be too granular, one can begin to understand a utilization profile by aggregating servers, for example those used to support a functional area such as customer service or production planning. Using acquisition cost or TCO, one can begin to develop insight into the actual cost to deliver in a service area (in this case only the hardware costs). Measurements such as the average utilization rate and the absolute peak rate help to further understand how well this asset class is performing. Clearly, there are shortcomings with this approach, but it's better than the current alternative: nothing.
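The aggregation described above is simple enough to sketch. The following is a minimal illustration, not a production tool; the server records, cost figures, and functional-area groupings are hypothetical.

```python
# Sketch: roll per-server acquisition cost and average utilization up to
# functional areas, to see cost and utilization per service area.
from collections import defaultdict

# Each record: (functional area, acquisition cost in dollars, avg CPU utilization %)
servers = [
    ("customer_service", 12000, 22.0),
    ("customer_service", 12000, 35.0),
    ("production_planning", 18000, 8.0),
    ("production_planning", 18000, 11.0),
]

def area_profile(records):
    """Aggregate total cost and mean utilization per functional area."""
    totals = defaultdict(lambda: {"cost": 0.0, "util_sum": 0.0, "count": 0})
    for area, cost, util in records:
        t = totals[area]
        t["cost"] += cost
        t["util_sum"] += util
        t["count"] += 1
    return {
        area: {"total_cost": t["cost"], "avg_util": t["util_sum"] / t["count"]}
        for area, t in totals.items()
    }

profile = area_profile(servers)
for area, p in sorted(profile.items()):
    print(f"{area}: ${p['total_cost']:,.0f} invested, {p['avg_util']:.1f}% avg utilization")
```

Even this toy view surfaces the kind of question the article is after: production planning here carries the larger investment at the lower utilization.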

By initiating a process to collect and measure this data, one can begin to understand asset performance; that is, what is the cost per unit delivered? Are there opportunities to improve the fiscal performance of the IT asset portfolio? Are we underinvested in certain business functional areas? With basic metrics and historical views, one can at least begin to ask the right questions. Overinvesting in hardware is more likely to go unnoticed than underinvesting, since underinvesting manifests itself as complaints from the user community.

Storage represents another area where some level of asset performance measurement can easily be initiated. One can start with a simple approach and measure total capacity versus current consumption. Looking at growth rates, peaks, access frequencies, and so on, one can begin to develop the financial performance, or at least the cost, of storage. For example, a SAN with only 10% of its capacity actually used will have an effective “cost per gigabyte” much higher than one at 80% of capacity. As with many metric schemes, it's more important to see how these metrics change over time than at a single point in time. A sparsely populated SAN may not be an issue in itself, but if it has been in that state for six months, that might be a different story. Putting in place a process to consistently measure performance across time, and relating it to the total cost of ownership (TCO) of the asset, appears to have real value. Looking at the initial investment, ongoing costs, and actual utilization, along with services delivered, can provide additional perspective and transparency.
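The “effective cost per gigabyte” arithmetic is worth making explicit. The sketch below uses hypothetical capacity and TCO figures; the point is only the shape of the calculation, not the numbers.

```python
# Sketch: effective cost per actually stored gigabyte.
# Annual TCO divided by consumed (not installed) capacity.

def effective_cost_per_gb(annual_tco, capacity_gb, used_fraction):
    """Return dollars per gigabyte of data actually stored."""
    used_gb = capacity_gb * used_fraction
    return annual_tco / used_gb

# Two SANs with identical cost and capacity but different consumption:
capacity_gb = 100_000   # 100 TB of raw capacity
annual_tco = 400_000    # hypothetical annual total cost of ownership

sparse = effective_cost_per_gb(annual_tco, capacity_gb, 0.10)  # 10% used
dense = effective_cost_per_gb(annual_tco, capacity_gb, 0.80)   # 80% used

print(f"10% utilized: ${sparse:.2f}/GB")  # $40.00/GB
print(f"80% utilized: ${dense:.2f}/GB")   # $5.00/GB
```

Tracked monthly, the same calculation shows whether a sparsely used array is a transient condition or a six-month-old investment sitting idle.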

Some aspects of this approach may have already been captured in ROI calculations that your organization performed when analyzing virtualization strategies, cloud strategies, or new systems investments. However, those are static, point-in-time values rather than the ongoing metric approach suggested here.

So what does it take to initiate such an effort? First, ITAM already has an inventory of the assets. Many organizations already capture and store performance utilization data, but most likely in a different system maintained by the technical organization. It is relatively easy to join this data using, say, serial number as the common identifier among systems. By merging in the performance data, you can begin to gain some insight into asset utilization. It may very well be that the technical systems also hold historical data, which will enable you to immediately gain insight into trends and anomalies, such as idle assets or even assets that have been uninstalled. This step is only the beginning. You may want to take a very small subset of the data, using spreadsheets to explore the potential of this model, and work out the idiosyncrasies before pursuing the full dataset.
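The join described above can be prototyped in a few lines. This is a minimal sketch with hypothetical field names and records; in practice the two datasets would be exports from the ITAM repository and the monitoring system.

```python
# Sketch: join the ITAM inventory to a separate performance system
# on serial number, and flag assets with no performance record.

inventory = {
    "SN-1001": {"model": "R640", "cost": 9500, "location": "DC-East"},
    "SN-1002": {"model": "R640", "cost": 9500, "location": "DC-East"},
    "SN-1003": {"model": "R740", "cost": 14000, "location": "DC-West"},
}

# Utilization data from the technical monitoring system, keyed the same way.
utilization = {
    "SN-1001": {"avg_cpu_pct": 31.0},
    "SN-1003": {"avg_cpu_pct": 2.5},
}

def join_on_serial(inv, util):
    """Merge the two datasets; return merged records plus the serials
    with no monitoring data, which may indicate idle or uninstalled assets."""
    merged, unmatched = {}, []
    for serial, asset in inv.items():
        if serial in util:
            merged[serial] = {**asset, **util[serial]}
        else:
            unmatched.append(serial)
    return merged, unmatched

merged, unmatched = join_on_serial(inventory, utilization)
print(unmatched)  # ['SN-1002'] -- in inventory but never reporting: idle or uninstalled?
```

The unmatched list is exactly the anomaly the paragraph describes: an asset ITAM says exists that the monitoring side has never seen.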

Chargeback systems may also be a source of some data, or of aggregation schemes that enable rollups by organizational unit, but the “cost” data they contain is typically not true cost and should not be used to determine asset performance as envisioned in this article.


The point of this article is not to provide specific metrics, or even to provide a solid framework for measuring asset portfolio performance, but rather to highlight that IT Asset Management should be exploring ways to measure asset performance to provide another perspective. Some of the possible outcomes using asset performance include:

  • Transparency for both business and IT
  • Additional perspective for cloud, virtualization, and other investment decisions
  • Improved operational management
  • Identification of under or over investment in capacity
  • Improved visibility of IT assets and how they are used
  • Decision support

IT Asset Management practitioners must continue to seek ways to improve their value contribution to the greater enterprise and to improve the business transparency of IT. Developing metrics is only half of the challenge; interpreting the results is equally demanding. But such metrics do provide the basis for improved decision making at both the operational and strategic levels. For this reason alone, it's worth pursuing an approach to measuring IT asset performance.

About the Author

Dan Abell is the Vice President of KodiakEDGE.