The front page of Street Technologies' expensive 8.5 x 11" corporate opus posed the following challenge: "How to eliminate half your work force." The inside of the brochure provided the means to rise to the task: "Get the other half to use your software!" When it was pointed out to the president of Street Technologies that a marketing campaign designed to create mass unemployment and spark a brutal Darwinian struggle for personal survival in its target audience might not be the most effective of all possible approaches, he airily dismissed the issue, saying "the piece was not aimed at the employees but the bosses." He'd apparently not considered the issue of who was going to be opening the mail.
As a somewhat lurid deep dive into the history of Silicon Valley, Merrill Chapman's In Search of Stupidity argues quite convincingly that most high-growth tech companies fail for non-technical reasons. Marketing missteps, mostly.
More or less from the beginning of time (or the 1970s, to be exact), most of the relatively big names in Silicon Valley dropped out of the limelight one by one, usually because of product marketing mistakes. The stories of former Valley darlings Digital Research, MicroPro, Ashton-Tate, Borland, Novell, and Netscape make it pretty clear that there could be a pattern, even though these are just anecdotes.
He also includes a number of mistakes by technology companies that still exist, having been able to recover from early stumbles. In effect, he argues that companies like Apple, Microsoft, IBM, and Intel succeeded despite their bad marketing decisions, having somehow minimized the downside or turned the situation around. Overall, it's an eye-opening and entertaining read, if a bit pessimistic about the reality of what happens in technology companies.
Usually, when discussing this with friends, the conversation turns to a Promethean conflict between the "geeks" and the "suits". Two different worldviews. And usually two different departments, not speaking or communicating with one another.
In other words--organizational silos. Ones that start appearing as a startup aims to scale up and become an established business. This is a pretty serious challenge a lot of companies face. Often, vague terms like "culture" get thrown around, but it hasn't really been clear how this works.
The Category of Operating Metrics You Pay Attention to Dictates Your Ability to Execute, Not the Specific Metrics
After giving it some thought, I think there is a secret hidden in plain sight as to why this happens: the metric framework used to "manage" the initiative. Metrics incentivize the entire initiative, and they can easily incentivize the wrong behaviors, driving the delivery and executive teams' attention to the wrong things. Let me be clear, it's worthwhile to think through your objectives, key results, and metrics to implement your strategy. That usually takes a lot of honesty, courage, and self-awareness as an organization. Depending on how far you get in this process, you'll end up with an approach that primarily rests on one of three categories:
- Resource utilization: how busy your people are.
- Output: how much finished work your teams deliver, and how quickly.
- Outcomes: whether what you deliver achieves the results your stakeholders and customers care about.
Usually, whichever of these you operate from, all of the other metrics you use will be interpreted in terms of their implications for it. Each is a different system of metrics, with different underlying assumptions about what needs to be monitored, and about who, if anyone, needs to be trusted. They are almost completely different philosophies, even though they are all intended to do roughly the same thing: help managers run efficient and effective companies that implement the organization's strategy.
Let's dig a bit deeper on what these actually mean in practice, shall we?
There's a guy I used to work with a lot, who had moved through the ranks to become a development manager. Let's call him Ted. Every time I'd walk over to Ted's desk or speak with him at the pub, he'd say he was busy. "Keeping busy." "We're really busy," he'd say of his team. It was almost a badge of honor, a way for him to earn respect from anyone who spoke to him.

In practice, it also meant that anyone who wanted to give him more to do started doubting whether it made sense to do so. If he and his team were already busy, they wouldn't have much capacity to solve the thing you wanted to ask about. So you'd go ask someone else. Note that he'd learned over time that it's best to say, and look like, he's busy, regardless of whether that was really true.
He was, and is, a great guy to have a beer with. So I was puzzled by his behavior, until I realized it reflected the management system in place.
If resource utilization was one of the main numbers executives asked about, then it made sense that "the management system" expected him and his team to be busy. Because that is the underlying assumption of effectiveness in this context: if all workers are at or near their capacity, then the company is correctly resourced. So capital is being used effectively.
This approach could arguably work, if the following were true:
- Every unit of output needs to adhere to a well-defined standard.
- What needs to be done is well understood, as it's been done many times before.
- The staff can be quickly trained up, and therefore are "fungible".
Like in a factory. The dividing line between unskilled and skilled manufacturing labor is half a day of training: if you can get someone up to speed within 4 hours, they are considered unskilled, and therefore cheap to replace. Skilled workers take longer to train, but even then you can hand responsibility to new hires relatively quickly. This mindset, or set of metrics, envisions the company operating as a factory.
It's a useful analogy if you are primarily concerned with optimizing worker output, as Frederick Winslow Taylor was. The argument goes that the whole company or department is efficient when each of its sub-parts is efficient. By squeezing as much as you can out of the individual resources you have, you achieve the best possible outcome. Or so it seems.
The truth is that this mindset causes a number of downsides when applied to knowledge work. Paradoxically, it's very difficult to hold any individual accountable in this context, even though the approach purportedly focuses on keeping individuals efficient. A lot of effort goes into coddling, coaxing, and pushing workers to actually do the work, because the whole system assumes that some percentage of them aren't working, and that it's the manager's responsibility to deploy his resources.
Individuals' time becomes the unit of account. So, for example, you track things like remaining total work days before a release, or you compare the units of resource being allocated against the actual contribution of each individual.
In my opinion, tracking only utilization easily breeds distrust of the line workers at the coal face, because they will usually behave differently from what the top-down plan says they are supposed to do. But this denies one of the critical elements of human nature: the knowing-doing gap. Just because people know what is expected of them, and their livelihood depends on it, doesn't mean they will act accordingly. Classic books like Daniel Pink's Drive have documented this in detail, popularizing what has been known in academic circles since the 1960s.
In turn, at a managerial level, this type of environment makes multi-tasking seem easy: you allocate resources to different projects at the same time by moving them around on a spreadsheet. As a short-term workaround, or to address various urgent issues, you assign people to multiple projects, where they have to juggle multiple tasks, expectations, and often taskmasters. If there are unresolved conflicts at a more strategic level, you can ask any given employee to spend 20% of his time on this new project and 20% on another one, all of which then also needs to be monitored and accounted for. And usually, the resulting problems get blamed on the workers, rather than on the unclear expectations they are being served.
Paradoxically, because there is little focus on finishing and delivering knowledge work, lots of multi-tasking results in lots of planes in the air and none of them landing. At the same time, all of your resources are maxed out, so they don't have much capacity to change what they're doing or to take on more. Accountability is tough to enforce: the available resources are over-committed already--by design.
So what's the alternative? It turns out "managing the unfinished work, and not the workers" is a better approach, as Don Reinertsen posits.
The main problem with focusing primarily on resource utilization is that, above a certain threshold, it's inversely related to the output of most systems. This is an attribute of complex systems. If you've ever had a slow laptop, it's usually because one of the key resources it uses is maxed out, or consistently close to the 100% mark. Either you add more or better resources, or you change how the system works in the first place and remove the bottleneck. This is just as true within entire companies as it is on your laptop.
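This relationship can be illustrated with a basic queueing model. The sketch below is my own, not from the text: assuming a single-server M/M/1 queue (the simplest standard model), the time a work item spends waiting grows non-linearly as utilization approaches 100%.

```python
# Illustrative sketch (assumes M/M/1 queueing dynamics, a simplification):
# why throughput suffers as a resource nears 100% load.

def avg_wait(utilization: float, service_time: float = 1.0) -> float:
    """Average time a work item waits in queue.

    For an M/M/1 queue this is rho / (1 - rho) * service_time,
    which grows without bound as utilization (rho) approaches 1.0.
    """
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return utilization / (1.0 - utilization) * service_time

for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"{rho:.0%} utilized -> work waits {avg_wait(rho):5.1f}x its service time")
```

At 50% utilization, a work item waits about as long as it takes to process; at 99%, it waits roughly 99 times as long. The exact numbers depend on the model, but the shape of the curve is the point.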
The video above is a really powerful demonstration of this principle. Having a lot of balls in the air, and a number of people to manage them, isn't really a good use of resources. This is what happens if you take the utilization approach too much to heart. Conversely, you can't completely ignore utilization either, or you can end up with problems like systemic under-staffing or individual performance issues. It's just that the primary frame for evaluating performance should be the overall output.
This is an initial step in the direction of "beginning with the end in mind". You have a goal. To achieve that goal, you need a certain amount of output, which can be measured in physical or digital terms (stories and story points). Your aim is to design a system that maximizes the rate of output, i.e. the change in output over the change in time. In high school calculus terms:

rate of output = d(Output) / dt
Or velocity. Or time between releases, as Steve Jobs pointed out. It's a powerful metric that has major implications for any project or initiative, as discussed here.
The subtle shift here is that you start looking at your company as a system designed to produce output: a system run, at the lowest level, by delivery teams, not individuals. High output is achieved by finishing quickly. Or finishing what you start, not long after you start it. A specific unit of output usually passes through a number of skill sets and processing stages, each involving individual workers with different talents. So you follow each unit of eventual output from the very beginning to the very end, and measure the elapsed time it takes. Once you do this, you rapidly realize that units of output spend the majority of their time in between processing states. Even if the workers are very efficient at any given intermediate stage, it's largely irrelevant, because the unfinished output spends disproportionately more time between stages.
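One way to put a number on this is flow efficiency: the time a unit of work is actively being worked on, divided by the total elapsed time from first touch to done. A minimal sketch, with hypothetical stage names and timestamps:

```python
from datetime import datetime

# Hypothetical stage history for one unit of work: (stage, started, finished).
# The gaps between one stage's finish and the next stage's start are queue time.
history = [
    ("development", datetime(2023, 3, 1, 9), datetime(2023, 3, 2, 17)),
    ("code review", datetime(2023, 3, 6, 10), datetime(2023, 3, 6, 12)),
    ("QA",          datetime(2023, 3, 10, 9), datetime(2023, 3, 10, 15)),
]

# Touch time: seconds actually spent working, summed across stages.
touch = sum((end - start).total_seconds() for _, start, end in history)
# Elapsed time: from the first stage's start to the last stage's finish.
elapsed = (history[-1][2] - history[0][1]).total_seconds()

print(f"flow efficiency: {touch / elapsed:.0%}")
```

Here 40 hours of actual work stretch over more than nine calendar days, so flow efficiency lands below 20%. Most of the elapsed time is queue time, which is exactly the point.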
If you do shift to thinking about time to completion for individual units of work, everything else becomes easier or unnecessary. Because you can quickly complete anything you set out to do, you become unstoppable. If no one else is doing this, you will quickly become a leader in your industry. You stop fearing competition, because you can out-execute them once your edge is big enough.
As a side effect of focusing on the overall system, you'll start to see more teamwork emerge. Because you aren't really that interested in measuring individuals, you shift the emphasis and incentives. After the initial gains of moving to this approach, further gains in output come from teamwork: how quickly a unit of work is passed through each stage and type of expertise needed to complete it. Plus, it's a fun environment to work in. People aren't just punching the clock any more; they acknowledge their interdependence with other members of the team. And you can see the effect of team dynamics on how much elapsed time is wasted between steps--for example, between development and QA.
Eventually, though, this approach starts to break down too, especially in a digital context. You hit a wall once you fully acknowledge that cognitive effort and business value are independent. Sometimes something hard to make is valuable; sometimes it isn't. And vice versa. The best example of this disconnect is the most common reason for new product failure: building something nobody wants. In other words, you can construct a high-output system that spends a lot of time building irrelevant or un-finishable things.
Together with this, consider that, especially in a digital context, not all units of output matter equally. In software, for example, if you build and release one highly relevant feature that directly solves a big problem for your customers, it can have a lot of value. To keep it simple, let's say a big B2B customer gets 45% cost savings of some sort by implementing your one-feature solution. If you then go and build another big feature (assuming you prioritized the work correctly), the next feature down the list will have less of an impact for the same customer. It will have less value, even though it requires roughly the same amount of work.
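To make the diminishing-returns point concrete, here's a sketch with hypothetical numbers (only the 45% first-feature figure comes from the example above; the rest are invented):

```python
# Hypothetical value of each feature, in correct priority order.
# Assume each feature costs roughly the same effort to build.
feature_values = [45, 20, 10, 5, 3, 2]  # e.g. % cost savings for the customer

cumulative, total = [], 0
for value in feature_values:
    total += value
    cumulative.append(total)

print(cumulative)  # -> [45, 65, 75, 80, 83, 85]
first_share = feature_values[0] / sum(feature_values)
print(f"the first release alone delivers {first_share:.0%} of the total value")
```

With these numbers, shipping only the first feature already captures more than half of the achievable value, which is why releasing early and reprioritizing beats batching everything into one big release.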
At its core, this is why "scope creep" doesn't matter as much when you are focused on velocity and are truly producing units of output frequently. Yes, there are a lot of things you could do, and the list keeps growing. But the individual stories or features have different levels of business value. If you release as early as possible, rather than when everything is ready, you are in a better position to start sales discussions earlier. This is the whole selling point of focusing on completion instead of utilization.
So the real point of productivity isn't really to finish lots of output. The real point is to be able to achieve what you care about quickly. But that means you need to be clear on the outcomes you're trying to achieve in the first place. And I mean crystal clear. Usually this isn't in place, so everyone just defaults to focusing on outputs. Or resources.
Which outcomes matter will vary a lot depending on which person you ask in a new product development environment. This can be a bit overwhelming for people who are totally new at it, but eventually it can be shaped into something useful. I find the product often ends up being used in a different context than the one it was originally prototyped for.
After all, new product development is messy. When you first start on anything that's truly new, you don't have a clue what success actually looks like. You have a lot of hopes and some guesses, but that's probably it. Pretending otherwise only makes it a lot harder.
Beyond that, you need to go and speak to customers in greater depth to figure out what they look like and what they care about. There are many tools that help you investigate this whole space. Teresa Torres (@ttorres) has her opportunity solution trees to help explore a space where you're totally new. Gojko Adzic (@gojkoadzic) came up with the impact map concept, where you brainstorm the outcomes each project stakeholder cares about, and then map them together, ideally in a workshop format.
Ultimately, these outcomes tie back to the value proposition box on the Business Model Canvas you are working with. These are the benefits your prospects will care about. How you deliver them (in terms of solution) is not that important--to them. They only care about the outcome. They "hire" your product to achieve that outcome, in the same way that you have a car to get from point A to point B. Most people won't care about the engine or horsepower or anything else specific to building a car from scratch, because cars are a relatively mature market and the technology has been well understood for a while.
For example, recently I was reading a post-mortem of a failed HR software startup. In short, there were a lot of reasons why they struggled, but one item in particular stuck out to me:
The problem we set out to tackle, reducing turnover and improving new hire performance, was felt acutely by our buyers (CHROs / Heads of Talent). It was NOT felt acutely by our users (recruiters). Recruiters are measured against average time-to-hire, not new hire retention or performance. We were trying to improve outcomes about which users would pay lip service, but which didn’t impact their paychecks or promotions.
They had focused on the wrong metric (outcome), because they had focused on the buyer's needs and not on the user's needs. The buyer just wants the overall outcome; the users just want to be able to move through the software quickly. It's a case of building a product that achieved the wrong thing. In other words, they'd built the wrong features, or at least insufficient features, to really get the result the customer wanted.
In this case, it was especially difficult because it was a B2B sale with multiple stakeholders at a client site. But ultimately, the number of story points and velocity didn't matter, because they didn't focus on the right value propositions and metrics that the prospects cared about. So their product failed.
And it's really a mistake to treat these as separate things (the effect versus the delivery process). It's tempting to say they live in different silos at a company and keep them separate. But you'll get more mileage by making sure that, for example, the developers understand the customer's pain, so that they can take customer constraints into consideration as they build and prototype the solution. If you think it's just a question of burning down enough story points, you might fail in the market even with a pretty burndown chart showing how stories are being completed, which is irrelevant, and a waste of people's lives and your company's resources to boot.
But When I'm Starting Out, I Don't Know What Outcomes Matter...
That's exactly why it's a mistake to focus primarily on utilization or even output metrics until you have better validation and structure around what matters and what you want to achieve, even if you think you need a lot of "building" to happen first. It's possible to work on all three levels with your team. If you are keen on timing the release relative to other things going on, that may be a smart move.
If you do have the money to start out at a big scale and you don't mind losing it, then that's OK. In the startup world, this is called "premature scaling". Even though budgets need to be large to show that a big company takes a new initiative seriously, be aware that you aren't likely to be 100% right straight out of the gate. Amar Bhide reported that two thirds of the Inc 500 company founders he interviewed moved away from their original concept:
More than one third of the Inc 500 founders we interviewed significantly altered their initial concepts, and another third reported moderate changes.
In other words, it's best to assume that you will be wrong about something when launching a new product, and make sure you have the option to pivot.
But the one certain way to limit costs is not to allocate the spending in the first place. You'll need the ability to scale up if you realize you're on the right path. But don't assume that you are omniscient about the future.
So That's It in a Nutshell
Ideally, you want to operate on all three levels of metric frameworks. And clarify them from the outcomes back down to the resource utilization.
- It should be clear what the stakeholders and goals are, so that you can start building things.
- Once you do that, make sure that your team has clear output targets (and you keep the releases as small as you can).
- And only then do you start looking at the feature factory efficiency measures.
Established companies usually start with the feature factory approach, because that's what they know and understand well. But that leaves them like a babe in the woods with their new product efforts.
To summarize: an end-to-end view, ideally including sales and marketing from the beginning, helps significantly to avoid the geeks-vs-suits nonsense that happens when silos get created. This is difficult to achieve as a "coordination" effort, especially from the top, by busy and often strung-out executives who are perceived as dangerous by everyone below them.
Make your new product effort like an internal startup and go from there. Assume you only had a handful of people to launch something new, and have them figure it out. Then they actually have a chance to succeed at this game: they can move quickly like a startup, yet benefit from the resources the enterprise provides.