I like the intention of this OKR: improving team capacity. I’m keen that every cycle includes one OKR like this, one that aims to enhance the team.
Similarly, I like the sentiment of the key results: increased velocity, reduced defects, and reduced time spent on reviews and blocks.
All the key results pass the “quick sniff test”: they all contain a number. However, as written, these key results are weak.
KR #1, increase velocity, is a textbook example of what not to do. I say textbook; it might even be in my book. As it stands the team will meet this key result but will probably not meet the objective. Instead they will induce “velocity inflation”: team members will meet the target by consciously, or subconsciously, increasing their effort estimates.
Velocity is a measure of output, it is information. Using it as a target will invoke Goodhart’s Law and behaviour will change.
Turning to KR #2, “Reduce bugs per feature from 1.7 to 1 (average)”. Part of me wonders why the team aren’t more ambitious: why not zero bugs per feature? It would almost be easier to hit zero than 1.
My real concerns here are: “how do you measure bugs?” and “what is a bug?”
Are bugs developers find in their own unit testing counted?
Is this a bug reported by system testers? UAT testers?
I would prefer to write “Bugs which escape the sprint”, i.e. defects reported in work previously considered done. That would, of course, mean that a bug found in the same sprint it was coded in wouldn’t count, but I’m happy with that.
One might say “Bugs in features released to live”.
One worry here is that one way of meeting this would be to slow down deliveries or undertake more testing.
As presented this KR is too vague.
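To show how the “bugs which escape the sprint” definition could be made concrete, here is a minimal sketch in Python. The `Bug` record and its field names are hypothetical, standing in for whatever the team’s tracker actually exports:

```python
from dataclasses import dataclass

@dataclass
class Bug:
    feature: str
    coded_in_sprint: int     # sprint in which the work was done
    reported_in_sprint: int  # sprint in which the bug was raised

def escaped_bugs_per_feature(bugs: list[Bug], feature_count: int) -> float:
    """Count only bugs reported after the sprint the work was done in."""
    escaped = [b for b in bugs if b.reported_in_sprint > b.coded_in_sprint]
    return len(escaped) / feature_count

bugs = [
    Bug("search", coded_in_sprint=4, reported_in_sprint=4),  # caught in-sprint: doesn't count
    Bug("search", coded_in_sprint=4, reported_in_sprint=6),  # escaped the sprint: counts
    Bug("export", coded_in_sprint=5, reported_in_sprint=7),  # escaped the sprint: counts
]
print(escaped_bugs_per_feature(bugs, feature_count=2))  # 2 escaped bugs / 2 features = 1.0
```

Whatever the exact definition chosen, pinning it down in something this explicit forces the team to answer the “what is a bug?” question before the KR is agreed.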
KR #3, “Time spent on code reviews reduced by 30%.” Again this is vaguely defined: is that 30% less time spent in review, or 30% less time waiting for review?
If this is “30% less time spent in review” it is easy to meet: reviewers can work more quickly, review less comprehensively, or simply wave more code through. But I would assume more problems would slip through review, and that would make KR #2 more difficult to achieve.
A goal of 30% less time waiting for review is brilliant; such delays can cause all sorts of problems. However, we need to be clear when the clock starts ticking on the wait, and when it stops.
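One possible definition, sketched below, starts the clock when the author requests review and stops it when a reviewer first engages, not when the review finishes. This is an assumption for illustration; the timestamps would come from whatever tool the team uses:

```python
from datetime import datetime, timedelta

def review_wait(requested_at: datetime, review_started_at: datetime) -> timedelta:
    """Wait time: clock starts when the author requests review,
    stops when a reviewer first engages (not when the review finishes)."""
    return review_started_at - requested_at

def mean_wait_hours(waits: list[timedelta]) -> float:
    """Average wait across a set of reviews, in hours."""
    return sum(w.total_seconds() for w in waits) / len(waits) / 3600

requested = datetime(2024, 3, 4, 9, 0)
started = datetime(2024, 3, 5, 14, 30)
print(review_wait(requested, started))  # 1 day, 5:30:00
```

Measured this way, the KR cannot be gamed by reviewing less thoroughly; it can only be met by picking reviews up sooner.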
Finally, KR #4, reduce time blocked. Hopefully the team has a robust system in place for measuring blocked time; probably their electronic tracking system can measure it.
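If the tracking system can export blocked intervals, the measure reduces to summing them against sprint capacity. A minimal sketch, assuming a hypothetical export of (ticket, blocked-from, blocked-until) tuples and an illustrative capacity figure:

```python
from datetime import datetime

# Hypothetical export from a tracking tool: (ticket, blocked_from, blocked_until)
blocked_log = [
    ("TEAM-101", datetime(2024, 3, 4, 9, 0), datetime(2024, 3, 6, 9, 0)),   # 48 hours
    ("TEAM-107", datetime(2024, 3, 5, 13, 0), datetime(2024, 3, 5, 17, 0)), # 4 hours
]

# Total hours spent blocked across all tickets
blocked_hours = sum(
    (until - frm).total_seconds() / 3600 for _, frm, until in blocked_log
)

sprint_capacity_hours = 400  # illustrative: 5 people x 2 weeks x 40 hours
print(f"{blocked_hours / sprint_capacity_hours:.1%}")  # blocked share of capacity
```

A 5% target then has an unambiguous meaning: blocked hours as a fraction of the sprint’s total working hours.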
I might wish that the team were more ambitious here: why not 10 or 20%? But I don’t know about the problems the team face; 5% might be a very difficult target for them.
My worry here, and again this is a challenge the team might be up for taking on, is that most blocks occur outside the team. That means the team has little influence over them. Still, I won’t criticise them for trying.
These aren’t big caveats and in general I’m happy with this KR. In fact, of the four KRs here this is the only one I wouldn’t want to revisit.