Hierarchical thinking makes our Western minds feel comfortable. When we look at the messy, apparent randomness of the world, we find ourselves searching for core truths that, once understood, make everything seem clean and logical. We want principles that fork into sub-principles like tree roots. Sometimes this impulse pays off big. Science and medicine are the rewards of breaking phenomena down into parts and sub-parts whose behavior can be explained and predicted. In the industrial age, companies thrived as hierarchical structures in which the role of each low-level worker was explicitly defined by managers sitting at the top, and the boundary between human and machine blurred.
While humans continue to extract deep value from hierarchical thought, our reductive tendencies can go overboard. Venkatesh Rao has explored this notion through the concept of “legibility.” To satisfy our deep desire for the world to be legible, we impose our own structures on behaviors we don’t understand. As discussed in James C. Scott’s book, Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed, this tendency has led to disastrous consequences in the realm of statecraft. The U.S. government’s attempt to create order in the Middle East illustrates the point. Rao summarizes the recipe for legibility-based failure:
- Look at a complex and confusing reality, such as the social dynamics of an old city
- Fail to understand all the subtleties of how the complex reality works
- Attribute that failure to the irrationality of what you are looking at, rather than your own limitations
- Come up with an idealized blank-slate vision of what that reality ought to look like
- Argue that the relative simplicity and platonic orderliness of the vision represents rationality
- Use authoritarian power to impose that vision, by demolishing the old reality if necessary
- Watch your rational Utopia fail horribly
So when should we apply hierarchical thinking and when should we avoid it? This question is especially puzzling in the realm of software development. Rooted in computer science and analytical modes of thought, software has internal logic that can be well understood. You can solve big problems by breaking them down into smaller problems.
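That decompositional payoff is concrete in code. As one illustration (my example, not from the essay), merge sort solves the "big" problem of sorting a list by recursively solving two smaller sorting problems and combining the results:

```python
def merge_sort(items):
    """Sort a list by divide and conquer."""
    if len(items) <= 1:
        return items  # base case: a list of 0 or 1 items is already sorted

    # Break the big problem into two smaller problems.
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    # Combine the two sub-solutions into a solution for the whole.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

Every step here is predictable and analyzable, which is exactly why hierarchical decomposition works so well at this scale.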
Yet we’ve seen the desire for legibility get software companies into trouble. A primary example is the waterfall model of development. It’s a lot easier to run a business when the outcome, cost, and timeframe of each project are clear. With the waterfall model, companies try to achieve certainty by completely defining requirements upfront and estimating the corresponding engineering work. While the outcome of running a line of code can be predicted, the industry has learned that the outcomes of complex software projects, even with heavy analysis, cannot be. Nobody can predict upfront the exact set of human or technical challenges. Waterfall projects have led companies to spend fortunes on failed software.
With the advent of Agile, we’ve seen companies trade in legibility for results. Learning and course adjustments are baked iteratively into the Agile process. In the Agile model, companies are less certain upfront about exactly what they are building. This can be hard for some people to accept, but in exchange they can be more certain that value will come out the other end.
The venture capital model provides another example of choosing results over legibility. In venture capital, it’s understood that the most successful approach to discovering how software can transform an industry is not to conceive and flesh out the one big idea. Instead, the winning approach is to make many bets and let time be the judge. We don’t need to know why company X will hit a home run while similar company Y will strike out. We just need someone to hit the home run.
While Agile and venture capital demonstrate that the software industry is learning how to manage uncertainty, companies still try to exert too much control over the future unfolding of their products. Product managers are commonly expected to create “product roadmaps.” The roadmap metaphor suggests that teams are driving down a defined road that will lead past known landmarks, like gas stations along a highway. There can be forks in the road, but since you have a roadmap, you know in advance what you’ll get in each direction.
It’s ironic that the same companies that embrace Agile create product roadmaps offering static views of the future. In my experience as a product manager, roadmaps are bullshit. They are created to make the company at large feel comfortable with where the product is going. Roadmaps make the product evolution legible. But inevitably, the experience of each project completely changes the trajectory. Things you thought were important aren’t. Things you previously neglected are vital. Things you thought would be technically easy turn out to be hard, and vice versa. The unfolding of new knowledge does not just dictate which road on the map to take; it renders the map meaningless.

I concede that, in the case of mature software with a long backlog of clearly valuable feature development, roadmaps have value, especially in managing the expectations of external clients. But if you’re trying to build something truly new, whether you are a big company or a startup, roadmaps create the illusion of certainty to the detriment of success. If the PM can tear up and regenerate the roadmap at each iteration, roadmaps are relatively innocuous. But if teams are actually held to them, they’re toxic. Product builders must be given the freedom to be maximally adaptive.
So if we don’t do roadmaps, what do we do instead? I’m not entirely sure yet. In my current company, I simply say what we’re doing next and keep a list of all the things we could be doing but aren’t. The collective imagination of a company should be captured, just not as flawed forecasts of dreams becoming real.