The Metrics are Lying
I was talking to a friend the other day about how her management is tracking teams’ “maturity” using “DORA metrics”, the four measures that correlate with high-performing organizations, identified by DevOps Research and Assessment (DORA) in “Accelerate” and the State of DevOps reports.
- Pipeline cycle time (“lead time” in the book): The time between when a change is committed to version control and when it is delivered to production.
- Delivery frequency: How frequently changes are deployed to production.
- Change fail %: The percentage of changes that require remediation.
- Mean time to repair (MTTR): The average time required to restore service.
These are becoming very popular in the industry, and for good reason: they are good indicators of the health of a delivery process. High-performing teams can deliver changes daily with pipeline cycle times of less than an hour. They have an MTTR measured in minutes, and less than 15% of their changes require remediation. So, all we need to do is tell every team that their goal is to achieve these numbers and then evaluate each team’s “maturity” against them, right?
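To make the four measures concrete, here is a rough sketch of how they could be computed from a team’s deployment log. The record format and field names are hypothetical, invented for illustration; real pipelines would pull this data from version control and incident tooling.

```python
from datetime import datetime
from statistics import mean

# Hypothetical deployment records: commit time, production deploy time,
# whether the change failed, and when service was restored if it did.
deployments = [
    {"committed": datetime(2021, 3, 1, 9, 0), "deployed": datetime(2021, 3, 1, 9, 45),
     "failed": False, "restored": None},
    {"committed": datetime(2021, 3, 2, 10, 0), "deployed": datetime(2021, 3, 2, 10, 50),
     "failed": True, "restored": datetime(2021, 3, 2, 11, 5)},
    {"committed": datetime(2021, 3, 3, 14, 0), "deployed": datetime(2021, 3, 3, 14, 30),
     "failed": False, "restored": None},
]

# Pipeline cycle time: commit -> production, averaged, in minutes.
cycle_time = mean(
    (d["deployed"] - d["committed"]).total_seconds() / 60 for d in deployments
)

# Delivery frequency: deploys per day over the observed window.
days = (deployments[-1]["deployed"] - deployments[0]["deployed"]).days or 1
frequency = len(deployments) / days

# Change fail %: share of changes that required remediation.
change_fail_pct = 100 * sum(d["failed"] for d in deployments) / len(deployments)

# MTTR: average minutes from a failed deploy to service restored.
failures = [d for d in deployments if d["failed"]]
mttr = mean(
    (d["restored"] - d["deployed"]).total_seconds() / 60 for d in failures
)

print(f"cycle time: {cycle_time:.0f} min")        # under an hour
print(f"frequency: {frequency:.1f} deploys/day")  # roughly daily
print(f"change fail: {change_fail_pct:.0f}%")
print(f"MTTR: {mttr:.0f} min")                    # minutes, not hours
```

Note how easy these are to compute, and how little the numbers alone tell you about *why* a team is performing well or badly; that is the point of the rest of this article.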
We’ve just created another scaling framework silver bullet. Silver bullets have one use case.
Since we are not hunting werewolves on the moors, we need to find tools to solve the problems we have. The trouble is that people hear about the “DORA metrics” and how they correlate to high-performing teams and stop there. Correlation is not causation. Excluding acknowledgments and the index, Accelerate is 229 pages. The metrics are in a table on page 19. Imagine what other useful information the other 228 pages might contain.
Why do high-performing teams have these results?
They care about what they are doing, they understand the problem they are trying to solve, they make decisions about how to solve it, they have responsibility for the business outcomes of their decisions, and they have pride in the results.
They execute a continuous integration development flow with tiny, production-ready changes integrating to the trunk very frequently (no, more frequently than that). When things break in production, they harden their delivery process. When the delivery process takes too long, they make it more efficient.
They exist in an organization that is optimized for the flow of communication, not for internal kingdoms. Budgets are planned for product lifecycles and value delivery, not annually planned buckets with a “use it or lose it” structure that is optimized to make life easy for accountants.
Their organization focuses on learning from success and failure. Sharing learning from both is seen as a priority. Why did it succeed? Why did it fail? People are encouraged to speak up when they have an idea, when they think an idea needs improvement, or when they know something is broken.
So, do we throw away the DORA metrics? No, they help us keep an eye on the health of the system. However, they are trailing indicators of poor health, not indicators that everything is going well. That’s why “Accelerate” mentions several leading indicators as well (I won’t spoil it for you). We can push teams to improve those metrics and can even get short-term wins that will get people promoted. However, if we are not tracking leading indicators for efficiency and team morale, organizing for the flow of value, and fostering the right organizational culture, then those gains will be lost sooner rather than later.
Improving outcomes is hard. Read the rest of the book… twice.
Written on March 28, 2021 by Bryan Finster.
Originally published on Medium