DevOps

The importance of measuring your IT operations

How can one tell whether an IT organisation is working well? A simple measure could be to compare the business functionalities delivered against those requested: if IT delivered everything the business asked for, one could assume the IT organisation works well; vice versa, if IT failed to deliver the majority, or the totality, of what was requested, it clearly isn't working well. However, the number of actual versus projected deliverables is a rather superficial way to measure the effectiveness of an IT department. What really measures the effectiveness of an IT organisation is the answer to the following question: “Is the IT department under consideration performing at its best?”. Suddenly the number of deliverables loses significance. So how does one measure whether an IT organisation is performing at its best? I believe the answer lies in the accurate measurement of IT Operations (IT OPS). Only thorough data collection allows an IT organisation to answer two simple questions:
  • How is IT performing? (Where are we at now?)
  • How would we like IT to perform? (Where do we want to go?)
Regardless of the IT methodology used, answering the two questions above requires meticulous measurement of IT OPS.
The challenge now is to identify the metrics that need to be measured in order to answer those two questions. In this first article I’m just going to suggest some of them; in subsequent articles I’ll attempt to describe each one in more detail and provide a framework for measuring them.
I think that the following are fundamental metrics in IT OPS:
  • Feature Average Lead Time (FALT). This is the average number of days it takes for a request, from the moment it was originated, to hit production (see the sketch after this list for a rough illustration).
  • Development Cost for a Deployed Feature (DECODEF). This is the development cost of getting a feature deployed into production. It comes down to bare people cost.
  • Keep The Lights On (KTLO) cost for an information system. This is the infrastructure cost for a particular service.
  • Number and Cost of Production Bugs after a release (NCPB). This measure is meant to provide both a figure around quality and a figure around quantity.
  • Cost of maintaining Legacy Systems (COLS). This is the people cost of interacting with legacy systems for issue investigation, small enhancements and production bugs.
  • Cost of Evergreening (COE). This is the cost of keeping the various technologies at a version that is supported by third-party vendors.
  • Business Value of each Deliverable to production (BUVD). This is perhaps the most difficult figure to get, since it’s not an exact science (how would one quantify the business value of Evergreening?).
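To make the idea concrete, here is a minimal sketch of how FALT could be computed. It assumes each feature record simply carries the date the request was originated and the date it reached production; the field names (requested_on, deployed_to_production) and the data are purely illustrative, not taken from any real tool:

    from datetime import date
    from statistics import mean

    def feature_average_lead_time(features):
        """Average number of days between request origination and production deployment."""
        lead_times = [
            (f["deployed_to_production"] - f["requested_on"]).days
            for f in features
            if f.get("deployed_to_production") is not None  # only count what actually shipped
        ]
        return mean(lead_times) if lead_times else 0

    # Made-up example data
    features = [
        {"requested_on": date(2021, 1, 4), "deployed_to_production": date(2021, 2, 1)},
        {"requested_on": date(2021, 1, 18), "deployed_to_production": date(2021, 3, 5)},
    ]
    print(feature_average_lead_time(features))  # 37 days (average of 28 and 46)

The same pattern (collect the raw events, derive a figure, track it over time) applies to the cost-based metrics as well, with deployment dates replaced by time sheets or infrastructure bills.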
Why is measuring IT OPS so important? Well, my answer is that while deliverables stats tell you whether your IT is delivering, IT OPS metrics lay the foundation for a Continuous Improvement culture, and it’s only through continuous improvement that an IT organisation can reach operational excellence.
In subsequent articles I’ll delve into each IT OPS measurement in detail and try to provide a framework that can be used as is, or customised, to obtain sensible figures for the above measurements.