Some quality metrics that helped my teams
I’ve been asked the question “what are the best metrics to improve software quality?” (or similar) a million times, so this blog post is a selfish time saver: you are probably reading it because you asked me a similar question and I sent you here.
Firstly, I am not a fan of metrics, and I consider a good 99% of the recommended software quality metrics pure rubbish. Having said that, a few metrics have helped teams I worked with, and those are the ones I will share.
Secondly, metrics should be used to drive change. I believe it is fundamental that the metric tracked is clearly associated with the reason for tracking it, so that people focus not on the number but on the benefit that observing the number will drive.
Good metric #1: In order to be able to refactor without worrying about breaking what we had already built, we decided to raise unit test coverage to >95% and measure it. Builds would fail if the metric was not respected.
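A minimal sketch of what such a coverage gate might look like. The names and the covered/total line counts are illustrative assumptions, not the team’s actual tooling; in practice this is usually a threshold setting in the build’s coverage plugin.

```python
COVERAGE_THRESHOLD = 95.0  # the >95% target from the article

def line_coverage(covered: int, total: int) -> float:
    """Coverage as a percentage of executable lines exercised by tests."""
    return 100.0 * covered / total if total else 100.0

def coverage_gate(covered: int, total: int,
                  threshold: float = COVERAGE_THRESHOLD) -> bool:
    """Return True if the build may proceed, False if it should fail."""
    return line_coverage(covered, total) >= threshold
```

A CI step would compute the counts from the coverage report and fail the build when `coverage_gate` returns False.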
Good metric #2: In order to reduce code complexity, improve readability and make changes easier, we set and measured a limit on the maximum size of each method (15 lines) and on cyclomatic complexity (I don’t remember the exact number, but I think it was <10). Builds would fail if the metric was not respected.
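A rough sketch of how such a check could be automated, assuming Python source and using a common simplification of cyclomatic complexity (1 plus the number of branching constructs). The limits mirror the article’s numbers; real builds would use an established tool rather than this hand-rolled walker.

```python
import ast

MAX_METHOD_LINES = 15
MAX_COMPLEXITY = 10  # the article's rough recollection, treated as an assumption

def method_metrics(source: str):
    """Yield (name, line_count, cyclomatic_complexity) for each function."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            lines = node.end_lineno - node.lineno + 1
            complexity = 1
            for child in ast.walk(node):
                # Each branch point adds one path through the method.
                if isinstance(child, (ast.If, ast.For, ast.While,
                                      ast.ExceptHandler, ast.IfExp)):
                    complexity += 1
                elif isinstance(child, ast.BoolOp):
                    complexity += len(child.values) - 1
            yield node.name, lines, complexity

def build_passes(source: str) -> bool:
    """Fail the build if any method exceeds either limit."""
    return all(lines <= MAX_METHOD_LINES and cc <= MAX_COMPLEXITY
               for _, lines, cc in method_metrics(source))
```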
Good metric #3: In order to continuously deliver low-complexity, easily testable units of work and help with predictability, we started measuring the full cycle time of user stories, from inception to production, with the goal of keeping it between 3 and 5 days. When a user story took more than 5 days, we retrospected and examined the reasons.
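The cycle-time check above can be sketched as follows. The date fields and the `needs_retrospective` name are assumptions for illustration; in practice the dates would come from the team’s tracker.

```python
from datetime import date

def cycle_time_days(started: date, delivered: date) -> int:
    """Full cycle time of a story, from inception to production."""
    return (delivered - started).days

def needs_retrospective(started: date, delivered: date) -> bool:
    """Stories over the 5-day target get examined in the retrospective."""
    return cycle_time_days(started, delivered) > 5
```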
In the 3 cases above, the focus is on the goal; the number is what we think will drive the change, and it can always be changed.
If people don’t understand why they write unit tests, they will achieve unit test coverage without guaranteeing the ability to refactor, for example by writing fake tests that don’t have assertions. We should never decouple the metric from the reason we are measuring something.
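To illustrate, here is what such a “fake” test looks like next to a real one. The `discount` function and test names are hypothetical examples, not from the article.

```python
def discount(price, percent):
    """Illustrative production code."""
    return price - price * percent / 100

def test_discount_fake():
    # Executes discount(), so coverage tools count its lines as covered,
    # but with no assertion it can never fail: it guarantees nothing.
    discount(100, 20)

def test_discount_real():
    # Couples the metric to its purpose: this test protects refactoring.
    assert discount(100, 20) == 80
```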
These are the good metrics, for me. If you want to see some of the bad ones, have a look at this article I wrote some time ago on confrontational metrics and delivery teams that don’t give a damn about their customers.
http://mysoftwarequality.wordpress.com/2012/12/27/the-wrath-of-the-mighty-metric/
Reference: Some quality metrics that helped my teams from our JCG partner Augusto Evangelisti at the mysoftwarequality blog.
Some interesting metrics. I think I am going to look at implementing them, or something like them, in my own development and see what the other members of the team think.
I like the idea of limiting the size of User Stories. Good design up front can often prevent these problems. But if a User Story takes that long to implement, then maybe the design needs to be rethought, and you can break that User Story up into smaller stories by examining separation of concerns.
Thanks… am bookmarking this article :)