Let’s start with the easiest – the results. Most of what we think of when we measure a process is output metrics – what did the operation cost, how much did we produce, and so on. The usual path to gaining control of a process is to define process output metrics first, then automate the process. That ensures management can see when a process is underperforming, but it is not a complete solution.
If you only measure the output of a function, you can’t predict defects, and you build a certain amount of waste into the process. Measuring output alone leaves you chasing process performance after the fact. Shigeo Shingo said as much in decrying the use of SPC (statistical process control) tools: “...control charts only help maintain the accepted defect rate – they cannot reduce defects to zero.” [Shingo, Zero Quality Control: Source Inspection and the Poka Yoke System, 1986] This is not to say that output metrics are unimportant. Often they’re the only tool we have to diagnose a problem – especially in the early phase of a project. Our challenge is to find the right set of metrics – output metrics are, by definition, lagging indicators.
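Shingo’s critique is easy to see in a quick sketch. The snippet below (hypothetical daily defect rates, invented for illustration) computes classic Shewhart-style 3-sigma control limits: every point falls inside the limits, so the chart declares the process “in control” even though the accepted ~2% defect rate persists.

```python
# Sketch of Shewhart-style control limits on a defect-rate series.
# The data below is hypothetical, chosen to illustrate Shingo's point.
import statistics

daily_defect_rates = [0.021, 0.019, 0.022, 0.020, 0.018,
                      0.023, 0.020, 0.021, 0.019, 0.022]

mean = statistics.mean(daily_defect_rates)
sigma = statistics.stdev(daily_defect_rates)
ucl = mean + 3 * sigma                # upper control limit
lcl = max(mean - 3 * sigma, 0.0)      # lower limit; a rate cannot be negative

for day, rate in enumerate(daily_defect_rates, start=1):
    status = "in control" if lcl <= rate <= ucl else "OUT OF CONTROL"
    print(f"day {day}: {rate:.3f} ({status})")

# Every point sits inside the limits, so the chart reports a stable
# process, yet the baseline defect rate is simply maintained, never
# driven toward zero.
```

The chart is doing its job – detecting drift away from the historical mean – but the historical mean itself is the accepted defect rate.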
As we gain experience with the process, we need ways to understand performance in a more timely and nuanced fashion, so that we can predict performance issues and analyze process misses. Performance is often a function of the complexity of the process’s inputs – the mix of claims processed, the sensitivity of raw materials to humidity, or a rise in sales proposals to FDA-regulated customers, for example. We need to segment our process using input metrics – think of volume per employee, output by product type, and so on. The inputs to a process give us the metric categories and denominators that help us understand complexity. They help us compare apples to apples.
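The denominator idea can be sketched in a few lines. Here is a minimal example, using an invented claims-processing dataset: raw total hours would make the second group look slow, but segmenting by claim type (the input category) yields an hours-per-claim figure that compares like with like.

```python
# Sketch: using an input category (claim type) as the denominator.
# Claim types and hours are hypothetical, for illustration only.
from collections import defaultdict

claims = [
    # (claim_type, hours_spent)
    ("auto", 2.0), ("auto", 2.5), ("auto", 1.5),
    ("commercial", 8.0), ("commercial", 9.5),
]

totals = defaultdict(lambda: [0.0, 0])   # claim_type -> [hours, count]
for claim_type, hours in claims:
    totals[claim_type][0] += hours
    totals[claim_type][1] += 1

for claim_type, (hours, count) in sorted(totals.items()):
    print(f"{claim_type}: {hours / count:.2f} hours per claim "
          f"({count} claims)")
```

The same pattern gives volume per employee, output per product type, or any other input-normalized metric: pick the input dimension that drives complexity, then divide.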
We still need to cover in-process metrics, and I want to give them enough space, so we’ll tackle those next week.
Next: How to define the right metrics for ‘actionable’ data