The Fox-Hedgehog Collaboration Loop

The dichotomy of the fox and the hedgehog is fascinating, in part, because it gets at a paradox we face in trying to achieve our goals. On the one hand, we need to be monomaniacal hedgehogs to relentlessly execute a vision. On the other hand, we need the fox’s open-mindedness and sound judgment to fit our plans with an uncertain environment.

Hedgehogs and foxes are often pitted against each other in competition. In this post, however, I’m going to propose a model for how the two mentalities can work together, intertwined in the creative process. The ability to strategically mix the hedgehog and fox cognitive styles, either on a team or within your skull, is critical for innovation.

The Fox-Hedgehog Metaphor

The fox-hedgehog philosophy started with the words of Archilochus, an ancient Greek poet.

“a fox knows many things, but a hedgehog one important thing”

– Archilochus

In a fight, the cunning fox deploys a diversity of strategies while the hedgehog has one defense, curling up into a ball. Philosopher Isaiah Berlin unleashes the metaphoric power in his essay The Hedgehog and the Fox.

Berlin describes how hedgehogs “… relate everything to a single central vision, one system, less or more coherent or articulate, in terms of which they understand, think and feel – a single, universal, organising principle in terms of which alone all that they are and say has significance…”

Foxes, in contrast, “…  lead lives, perform acts and entertain ideas that are centrifugal rather than centripetal; their thought is scattered or diffused, moving on many levels, seizing upon the essence of a vast variety of experiences and objects for what they are in themselves, without, consciously or unconsciously, seeking to fit them into, or exclude them from, any one unchanging, all embracing, sometimes self-contradictory and incomplete, at times fanatical, unitary inner vision.”

Hedgehogs commit to a single framework for how the world works while foxes dance across many models.

It’s tempting to place thinkers in one profile or the other. Berlin says that Plato, Fyodor Dostoyevsky, and Friedrich Nietzsche are hedgehogs while Shakespeare, Aristotle, and James Joyce are foxes.

It’s also interesting to consider which cognitive profile is better for different purposes. Philip E. Tetlock’s research indicates that foxes are better than hedgehogs at predicting geopolitical events (see my last post). Jim Collins says that hedgehogs are better leaders. “They know how to simplify a complex world into a single, organizing idea—the kind of basic principle that unifies, organizes, and guides all decisions.”

A Collaboration Model

While many of us fit one profile better than the other, there’s the intriguing possibility that we, individually or in teams, could metamorphose between fox and hedgehog mentalities with strategic intent. The most successful innovators, it seems, can nimbly toggle between these two modes within a single problem space.

With this diagram, I’ve depicted what a fox-hedgehog collaboration process might look like.


Each segment of the loop corresponds to a component of the cognitive cycle described by American pragmatist philosopher Charles Sanders Peirce as summarized by computer scientist John F. Sowa. (Thanks to Joseph Kelly for the tip!) Let’s walk through the loop.

  1. We form a theory for how the world works.
  2. We deduce how to apply the theory to a particular context.
  3. We predict the results of a planned action.
  4. We perform the action.
  5. Through induction, we learn from the resulting data and experience.
  6. We accumulate new knowledge.
  7. Given our new understanding, we use abduction to make the leap to a new theory based on a hunch or a guess.

In the Hedgehog-Fox Collaboration Loop, each animal owns half of the loop with two handoff points.

Theory, given its reductive and abstract nature, is clearly hedgehog territory. Theories rest upon Big Ideas that, in startup terms, exist in the form of visions or Thielian secrets. They roughly take the form, “By organizing around [insert Big Idea], we will [insert social mission].”

Many startup founders don’t think of their Big Ideas as “theories.” Their visions, to them, feel more like inevitabilities than hypotheses to be validated. However, reframing a hedgehog’s vision as a “theory” is a key step towards creating an environment where hedgehogs and foxes can collaborate. Theories are falsifiable. While leaders must be confident, they must also have the humility to subject their brainchildren to testing.

Given a theory for how to enact world change, innovators must deduce the implications in a particular environment or market. Hedgehogs, masters of their theories, have a role to play in converting their ideas to actionable strategies. Foxes, however, are creatively advantaged in the planning process. By spotting cross-domain patterns, foxes can apply foreign techniques well-suited to the problem at hand. See this example of how an algorithm for interpreting telescope images was applied to breast cancer identification.

In evolved hedgehog-fox relationships, prediction is a point in the cognitive cycle where the fox can say “no.” As I mentioned above, Tetlock’s research shows that foxes are better forecasters than hedgehogs. Foxes can predict that a proposed action is so unlikely to succeed that it should not even be attempted, like a general refusing to accept the king’s order to attack at the wrong time. Foxes should decide which endeavors are most likely to yield the greatest benefit at the lowest cost.

Assuming acceptance of the hedgehog-defined mission, foxes are best equipped to cleverly implement the plans. When reality hits, optimal action requires situational awareness and agility. Foxes can adjust tactics based on relatively unfiltered, malleable views of their changing environments.

The value of each action we perform is multiplied if we can learn from the results. Hedgehogs have a tendency to over-filter empirical evidence. Data that doesn’t fit the hedgehog’s model is ignored. Thus, we need foxes to own the inductive reasoning process. Foxes will pluck out interesting insights regardless of how conveniently they fit within the active paradigm.

Induction provides the ingredients for, as Sowa puts it, the “knowledge soup” in the cognitive cycle. The multi-model-minded foxes craft a knowledge soup that is diverse in flavors and textures. When the hedgehog re-enters the picture at the knowledge segment of the loop, they will encounter unfamiliar elements. The fox keeps the hedgehog’s thinking fresh.

In his discussion of hybridizing fox and hedgehog thinking, Venkatesh Rao writes that “The trick is to listen to everything, but also say no to almost everything…  But once in a while, the fox half decides to pass something along to the hedgehog half, and the latter trusts the intel enough to change course.”

We saw earlier that foxes should say “no” to implementing some plans proposed by the hedgehog. Conversely, the hedgehog should say “no” to reacting to much of the new information supplied by the fox.

Yet, sometimes a foxy insight will penetrate the hedgehog’s stingy filter, exposing a flaw in the hedgehog’s model. In the collaboration loop, hedgehogs use abductive reasoning to form and revise their theories to accommodate the world illuminated by the fox, leaping to each new iteration based on a hunch or guess (see Joseph Kelly’s discussion of abductive reasoning and startups). A hedgehog’s passion for its Big Idea is a driving force of abduction. A singular focus propels the hedgehog to make necessary adjustments without losing the gravitational weight of the original vision.

Individuals and Teams

The hedgehog-fox collaboration loop can be applied to your own thought process or to a team workflow.

Venkatesh Rao invented the term “foxhog” to describe individuals who can combine fox and hedgehog thinking. A foxhog, Rao writes, is “A miraculous beast that turns uncertainty into certainty.”

One of my favorite examples of foxhog innovation is Gutenberg’s invention of the printing press, according to Arthur Koestler’s account in The Act of Creation. Gutenberg, in hedgehog fashion, fixated on a singular goal: to print copies of the Bible at scale. In fox fashion, he temporarily let go of his mission to enjoy the wine harvest, where he stumbled upon the wine press. Gutenberg’s eureka moment occurred after his fox half told his hedgehog half about a mechanic of the winepress that could be applied to reliably print with movable type. I made a maze to tell this story.

I consider myself to be a natural hedgehog who is learning to be foxier over time.

My default mode is to survey the world with an eye towards exploiting new information for my goals. As a product manager, the products I work on become organizing principles for much of my cognition. Rao says that “hedgehogs have lots of books 5% read (judged by their cover), and a few 300% read (repeatedly and closely re-read).” This is me.

But I feel liberated to know that the best way to achieve my hedgehog dreams is, on occasion, to forget about them and enjoy the world as a fox, without an agenda. In doing so I can improve my judgment and eat yummier knowledge soup.

I also try to partner with foxes. Assuming we can’t all be perfect fox-hedgehog hybrids, we can at least aspire to apply both cognitive styles in aggregate on teams. To be productive, hedgehogs should recognize they need foxes and vice versa.

The Superforecasting Playbook for Product Development

When you inject new software into an environment, it’s difficult to forecast what will transpire.

Product teams continually try to predict what will work. We predict that a certain new feature will increase conversion, that a targeted ad spend will bring critical mass, or that market conditions will change in our favor. We expect many of our bets to fail.

We inoculate ourselves against bad predictions with methodologies like Agile and Lean Startup. Measuring market signal with each iteration helps us quickly change course. If the brilliant new feature decreases conversion, we can remove it before our revenue plummets.

Minimizing losses from bad bets gives us more opportunities to try things. However, the experience of failed attempts does not reliably lead to better predictions in the future. Faced with new information, we can overreact, under-react, misunderstand, forget, or turn a blind eye. Emotions and cognitive biases can cloud our judgment.

Learning can go even more haywire at an organizational level. Add group-think, knowledge silos, and corporate amnesia to the list of epistemological threats.

But imagine if we could systematically improve the accuracy of our company’s predictions. Then we could reliably increase our ability to make winning bets over time.

I was surprised to discover that a playbook for optimizing prediction accuracy already exists. It’s called Superforecasting. But it’s not being widely leveraged by product teams yet.


The first step to improving something is measuring it. In the book Superforecasting, Philip E. Tetlock and Dan Gardner describe a system for measuring the accuracy of predictions called the Brier score. In their words, “Brier scores measure the distance between what you forecast and what actually happened.”

To explain Brier scores, let’s consider weather forecasting.

Weather forecasting involves making probabilistic predictions like “There’s a 70% chance it will rain tomorrow.” If it doesn’t rain tomorrow, the forecast wasn’t necessarily wrong, since it gave a 30% chance of no rain. This can be counterintuitive: people tend, wrongly, to judge a forecast as mistaken whenever an event it called likely fails to happen.

In the terminology of the Brier score, a forecaster’s predictions are “well calibrated” if the things they say are X% likely to happen actually happen X% of the time. So if a meteorologist says “there’s a 70% chance of rain tomorrow” one hundred times during the year, they would be perfectly calibrated if it rained on 70 of those days and didn’t rain on the other 30. If it had rained only 50 times, we would say the forecaster is overconfident. If it had rained 80 times, they’re underconfident. Overconfidence and underconfidence are the two forms of miscalibration.



Perfect calibration
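To make calibration concrete, here’s a minimal Python sketch (the function name and data are invented for illustration): it buckets forecasts by their stated probability and compares each bucket to the observed frequency of the event.

```python
from collections import defaultdict

def calibration_table(forecasts, outcomes):
    """Map each stated probability (rounded to one decimal) to the
    observed frequency of the event among forecasts at that level."""
    buckets = defaultdict(list)
    for p, o in zip(forecasts, outcomes):
        buckets[round(p, 1)].append(o)
    return {p: sum(os) / len(os) for p, os in sorted(buckets.items())}

# 100 forecasts of "70% chance of rain"; it rained 70 times: perfect calibration.
perfect = calibration_table([0.7] * 100, [1] * 70 + [0] * 30)       # {0.7: 0.7}

# Same forecasts, but it only rained 50 times: overconfident.
overconfident = calibration_table([0.7] * 100, [1] * 50 + [0] * 50)  # {0.7: 0.5}
```

A well-calibrated forecaster’s table lies along the diagonal: the stated probability matches the observed frequency in every bucket.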

The other dimension of the Brier score is resolution. Resolution measures the decisiveness of a forecaster’s predictions. If a meteorologist says the chance of rain is 100% when it does rain and 0% when it doesn’t, they would have perfect resolution.

Forecasters can be perfectly calibrated with poor resolution. Let’s say a forecaster predicts a 50% chance of rain every day. Even if it does rain exactly half of the days, their resolution will be dinged for not elevating the chances on the days it does rain and lowering them on the days when it doesn’t. Poor resolution implies overly cautious predictions.

The Brier score combines calibration and resolution into a single accuracy metric. The lower the Brier score, the more accurate a forecaster’s predictions.
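As a concrete sketch, the common binary form of the Brier score is just the mean squared distance between each forecast probability and the 0/1 outcome. (Tetlock uses Brier’s original two-category formulation, which for binary events is exactly twice this value, so the rankings it produces are identical.)

```python
def brier_score(forecasts, outcomes):
    """Binary Brier score: mean squared distance between forecast
    probabilities and outcomes (1 = event happened, 0 = it didn't).
    0 is a perfect score; lower is better."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A decisive, accurate forecaster scores far better than a hedger.
decisive = brier_score([0.9, 0.1, 0.8], [1, 0, 1])  # ≈ 0.02
hedger = brier_score([0.5, 0.5, 0.5], [1, 0, 1])    # 0.25
```

Note how the score captures resolution: a forecaster who always says 50% lands at 0.25 no matter what happens, while decisive, correct probabilities drive the score toward zero.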

With a metric for prediction accuracy, we can scientifically investigate which factors lead individuals or teams to improve their foresight. Tetlock did exactly this to win a forecasting tournament administered by the United States Intelligence Community.

The forecasting tournament was created in the wake of the tragic false assessment that Iraq possessed WMDs. According to the IARPA website, the goal of the government program is to

… dramatically enhance the accuracy, precision, and timeliness of intelligence forecasts for a broad range of event types, through the development of advanced techniques that elicit, weight, and combine the judgments of many intelligence analysts.

The tournament, held over four years, asked participants to forecast questions like “Will Russia officially annex additional Ukrainian territory in the next three months?” and “In the next year, will any country withdraw from the eurozone?”

As measured by the Brier score, the predictions made by Tetlock’s group of “superforecasters” were 30% more accurate than the average for intelligence community analysts who had access to classified information.

While the highest-performing forecasters are smart, Tetlock concluded that making consistently accurate predictions has more to do with how they think than with raw intelligence. Tetlock’s discoveries constitute a set of proven principles of good judgment, summarized here:

In philosophic outlook, [superforecasters] tend to be:

CAUTIOUS: Nothing is certain

HUMBLE: Reality is infinitely complex

NONDETERMINISTIC: What happens is not meant to be and does not have to happen

In their abilities and thinking styles, they tend to be:

ACTIVELY OPEN-MINDED: Beliefs are hypotheses to be tested, not treasures to be protected

INTELLIGENT AND KNOWLEDGEABLE, WITH A “NEED FOR COGNITION”: Intellectually curious, enjoy puzzles and mental challenges

REFLECTIVE: Introspective and self-critical

NUMERATE: Comfortable with numbers

In their methods of forecasting they tend to be:

PRAGMATIC: Not wedded to any idea or agenda

ANALYTICAL: Capable of stepping back from the tip-of-your-nose perspective and considering other views

DRAGONFLY-EYED: Value diverse views and synthesize them into their own

PROBABILISTIC: Judge using many grades of maybe

THOUGHTFUL UPDATERS: When facts change, they change their minds

GOOD INTUITIVE PSYCHOLOGISTS: Aware of the value of checking thinking for cognitive and emotional biases

In their work ethic, they tend to have:

A GROWTH MINDSET: Believe it’s possible to get better

GRIT: Determined to keep at it however long it takes

I love how Tetlock’s research puts data to philosophically rich concepts, like the hedgehog-fox dichotomy. The Ancient Greek poet Archilochus famously said that “a fox knows many things, but a hedgehog one important thing.” Tetlock categorized his forecasters into each persona.

Hedgehogs are fixated on one Big Idea, a single worldview. They see everything through that lens. Tetlock describes this group in his forecaster pool:

Some were environmental doomsters (“We’re running out of everything”); others were cornucopian boomsters (“We can find cost-effective substitutes for everything”). Some were socialists (who favored state control of the commanding heights of the economy); others were free-market fundamentalists (who wanted to minimize regulation). As ideologically diverse as they were, they were united by the fact that their thinking was so ideological. They sought to squeeze complex problems into the preferred cause-effect templates and treated what did not fit as irrelevant distractions. Allergic to wishy-washy answers, they kept pushing their analyses to the limit (and then some), using terms like “furthermore” and “moreover” while piling up reasons why they were right and others wrong. As a result, they were unusually confident and likelier to declare things “impossible” or “certain.”

Foxes, in contrast, are open to many perspectives. They synthesize inputs from multiple models. Tetlock writes:

The [fox] group consisted of more pragmatic experts who drew on many analytical tools, with the choice of tool hinging on the particular problem they faced. These experts gathered as much information from as many sources as they could. When thinking, they often shifted mental gears, sprinkling their speech with transition markers such as “however,” “but,” “although,” and “on the other hand.” They talked about possibilities and probabilities, not certainties. And while no one likes to say “I was wrong,” these experts more readily admitted it and changed their minds.

The hedgehog vs. fox battle has been officially settled: foxes win, at least as far as forecasting goes. Tetlock found that “Foxes beat hedgehogs on both calibration and resolution. Foxes had real foresight. Hedgehogs didn’t.” Imagine an economist who relentlessly preaches austerity in the face of clear evidence showing the failure of austerity measures.

Yet, Tetlock doesn’t close the door on the utility of the hedgehog mindset. He raises the possibility that, while Hedgehogs suck at forecasting, their commitment to a single frame of thinking might lead them to ask more illuminating questions. Tetlock speculates:

The psychological recipe for the ideal superforecaster may prove to be quite different from that for the ideal superquestioner, as superb question generation often seems to accompany a hedgehog-like incisiveness and confidence that one has a Big Idea grasp of the deep drivers of an event. That’s quite a different mindset from the foxy eclecticism and sensitivity to uncertainty that characterizes superb forecasting.

This is a compelling hypothesis for how foxes and hedgehogs can collaborate to achieve great things.

Applying Superforecasting to Startups

Now that we’ve taken a quick tour of superforecasting, let’s consider how the system could be applied to product development.

Imagine your team at an eCommerce startup is finishing a project to redesign the checkout page. Right before you launch an A/B test with the new page, you pose the question:

“Will the purchase conversion rate of the new checkout page beat the current page by 10% for the 30 days after launch?”

Everyone on your team provides probabilistic forecasts ranging from 10% to 80%. The average prediction gives a 30% chance that the new page will increase conversion by 10%.

If you had seen these forecasts before you committed to doing the project, would you have moved forward with it?

It depends. Given the scale of your company, maybe a 30% chance to increase revenue by 10% is a good bet to make. But maybe other endeavors are more likely to succeed, so you start forecasting all proposed projects before you prioritize them.
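One hedged sketch of what “forecast every proposed project before prioritizing” could look like, ranking bets by expected value. Every project name, probability, payoff, and cost here is invented for illustration; real payoffs would come from your own revenue model and the probabilities from your team’s forecasts.

```python
# Hypothetical project portfolio: p_success from team forecasts,
# payoff and cost in the same (arbitrary) revenue units.
projects = {
    "checkout redesign": {"p_success": 0.30, "payoff": 10.0, "cost": 2.0},
    "referral program":  {"p_success": 0.55, "payoff": 4.0,  "cost": 1.0},
}

def expected_value(p):
    # Simple expected value: probability-weighted payoff minus cost.
    return p["p_success"] * p["payoff"] - p["cost"]

# Rank proposed projects by expected value, best bet first.
ranked = sorted(projects, key=lambda name: expected_value(projects[name]),
                reverse=True)
```

With these invented numbers, the flashier 30%-chance bet loses to the smaller, likelier one, which is exactly the kind of trade-off the forecasts surface.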

And your forecasts need not be limited to the impact of potential projects. You can predict other factors related to the inner workings of your company, the technological landscape, or the market. Some examples:

“Will the engineering team finish refactoring our accounting system within two months of the project start date?”

“If we add five employees to the sales team in March, will our customer base grow by 20% by the end of the year?”

“Will the augmented/virtual reality market exceed $100 billion in revenue by 2020?”

“Will the word recognition accuracy rate of Google Voice reach 99% by the end of 2019?”

“Will the average health insurance deductible increase by 5% by the end of 2018?”

With accurate answers to questions like these, we’d be more aware of our capabilities and a step ahead of the market. We’d have the foresight necessary for exceptional strategy.

One might argue that companies are already making forecasts in the form of cost-benefit analysis or financial models. But these formats are too ambiguous to help us improve our predictions over time. Tetlock and Gardner write:

The first step in learning what works in forecasting, and what doesn’t, is to judge forecasts, and to do that we can’t make assumptions about what the forecast means. We have to know. There can’t be any ambiguity about whether a forecast is accurate or not…

In cost-benefit analysis, you might forecast the impact of a project to be high. But what exactly does “high” mean? And in what time frame must the benefit occur? There are too many shades of gray. And maybe we’re two months after launch with little impact from the project. Does that mean the prediction is wrong or do we give it more time?

With a financial model, the projections will surely not be exactly right. So how can we determine if the model was correct? How close do the numbers need to be?

Cost-benefit analysis and financial models serve a real purpose, but they must be converted into unambiguous forecasts if we’re serious about improving our predictions over time.

So let’s say our company starts making a high quantity of unambiguous predictions. If we calculated our collective Brier score, we’d probably see that it leaves much to be desired.

Besides the fact that accurate forecasting is hard, I suspect many startup people are especially poor forecasters. There’s a hedgehoggy aspect of working on a product: it requires an almost irrational devotion to the product direction, a narrow lens that misses relevant opposing perspectives. We may be prone to overconfidence when predicting the success of product launches or favorable shifts in market conditions. Exuberant optimism is conducive to good work but runs counter to accurate forecasting.

Perhaps company staff are too biased to proficiently forecast their own future. The solution could be to insert third-party superforecasters into a company’s operations. It appears that Tetlock co-founded a commercial firm based on this idea. I’m compelled to try this out. In the future, asking questions of superforecasters could become as ubiquitous as running user tests or customer surveys. I anticipate there’d be challenges in transferring domain knowledge, at least for some key questions, but these can be overcome.

But even without enlisting external superforecasters, applying the model is useful. These practices will be beneficial no matter what:

  • Creating unambiguous hypotheses for projects
  • Making predictions before you commit to resourcing
  • Looping back to old predictions to judge their accuracy

Turning these practices into habits is the goal of my side-project, Double-Loop.
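A minimal sketch of what those three practices could look like in code (all function names and questions are invented for illustration; this is not how Double-Loop is implemented): record unambiguous forecasts up front, resolve them once the outcome is known, and score the resolved ones with a running Brier score.

```python
# A tiny prediction log: each entry holds an unambiguous question,
# the forecast probability, and (eventually) the 0/1 outcome.
log = []

def predict(question, probability):
    log.append({"q": question, "p": probability, "outcome": None})

def resolve(question, happened):
    # Loop back to an old prediction and record what actually happened.
    for entry in log:
        if entry["q"] == question:
            entry["outcome"] = 1 if happened else 0

def running_brier():
    # Brier score over all resolved predictions; lower is better.
    resolved = [e for e in log if e["outcome"] is not None]
    return sum((e["p"] - e["outcome"]) ** 2 for e in resolved) / len(resolved)

predict("New checkout page beats current conversion by 10% in 30 days?", 0.3)
predict("Accounting refactor done within two months?", 0.8)
resolve("New checkout page beats current conversion by 10% in 30 days?", False)
```

Even this crude log enforces the discipline that matters: forecasts are written down before committing, and they’re judged later rather than forgotten.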

A good way to summarize the potential of the superforecasting model for product teams is that it nudges us to become more like foxhogs, a hybrid animal invented by Venkatesh Rao. He explains:

The net effect of this personality profile is that the foxhog can live in a highly uncertain environment, and be responsively engaged within it, yet create a high-certainty execution context.
This ability to turn uncertainty into certainty is the key to … entrepreneurial effectiveness…

Accurate predictions are not sufficient for success. You need to ask the right questions. This requires leaving no stone unturned while you push forward on a singular mission to achieve your product vision.

Really Bad Chess: A New Way to Interact With AI

Some of the most interesting inventions spark new ways for people and machines to interact. One of the best examples I’ve seen of this, lately, is the iOS game Really Bad Chess.

Conventional chess programs come with adjustable difficulty meters controlling the prowess of your computer opponent. At some difficulty levels, you can easily win. But when you ramp up the computer’s smartness to a certain level, you’re consistently defeated.

For a game to be pleasurable, there needs to be the right amount of tension. When facing a computer opponent, you can play a competitive game by setting its ability to a level similar to your own. But this feels like fake tension, like playing against someone you know isn’t trying hard. A win doesn’t feel like a true victory.

For me, the superiority of computers over humans in chess has a deflating effect that even makes it less fun to play against other humans. What’s the point of getting better at chess when the involved mental processes would be better delegated to a computer program, like long division or spell checking?

Really Bad Chess alters the stale dynamic between people and chess AI through a novel handicapping mechanism. In the app, the AI never gets smarter. Instead, the variable is the relative strength of your starting pieces. This subtle change makes a transformative difference in the experience of playing chess against a computer.

When you play your first game, your randomly generated starting pieces are way stronger than the AI’s, like this.


Here, your job is to not blow your advantage, which itself provokes an interesting mindset. With each win against the computer, its starting pieces get stronger, until you’re staring at a disadvantage, like this.


Having played a lot of chess against computers, my demise felt inevitable when I first played from behind against the AI. How could I beat the sharp precision of a computer program when it simply didn’t have to mess up to win? But then I realized that the AI in Really Bad Chess is prone to egregious blunders.

Here we have the paradigm shift! Really Bad Chess jostled me out of the habit of comparing my intelligence to the machine. In this case, it’s no contest: I’m smarter than the machine. So instead of feeling like I need to match the calculating ability of a computer, Really Bad Chess motivates me to use my creativity to win as an underdog. The challenge is not to play perfect chess: it’s to exploit mistakes, like a hacker finding vulnerabilities in a system.

But how much smarter than the machine am I? It’s unclear. Part of the beauty of Really Bad Chess is that there’s no clear wall. The more you win, the more severe your disadvantage becomes. But you could always discover new ways to leverage the AI’s weaknesses, making the impossible possible. Assuming the fallibility of one’s opponent leads to a growth mindset where there’s always hope of winning, regardless of the apparent odds.

Winning as a big underdog requires breaking with the orthodoxy of “good chess.” When playing against a conventional opponent, human or machine, one avoids making moves that lead to a clear negative consequence. E.g., you don’t knowingly move a piece to a square where it can be captured, unless it’s an intentional sacrifice for a larger strategic gain.

When facing a large disadvantage, however, your only shot at winning comes when your opponent misses obvious opportunities. If you limit yourself to traditionally good moves, your range of options will be too limited to mount a comeback. In this sense, Really Bad Chess demands “bad” moves. There’s a risk of getting punished, but this added freedom is what you need to tip the scale.

I love Really Bad Chess because it cultivates a mindset of innovation. Breakthroughs happen when you recognize that the standard playbook for a situation is not optimal for the actual world we’re in. Really Bad Chess takes you through an evolution of shedding the conventional chess playbook to make large adaptations to a subtly different reality.

Thanks to Tom Schmidt for sharing Really Bad Chess with me.


The Amazing Trisociation Cube: A Visual Model of Creation

Product managers don’t create things the way people often think of creating. We usually aren’t writing the code or crafting pixel-perfect designs. Instead, our role is to synthesize diverse inputs from customers and contributors into a singular direction: a narrative that guides the team that builds the product hands-on.

On the surface, synthesis feels different from creation. Creation suggests introducing something new into the world while synthesis implies blending existing elements. However, Arthur Koestler explains in The Act of Creation that synthesis is not only a key part of the creative process but near to its essence. He uses the term “bisociation” to describe the fusion of two otherwise disparate matrices of thought. Koestler demonstrates that the greatest creative breakthroughs spanning science, art, and humor can be understood in terms of bisociation. Gutenberg’s invention of the printing press bisociated the techniques of the wine-press and the seal. Kepler’s laws of planetary motion bisociated the previously unconnected fields of astronomy and physics. In my post, When the Tension Snaps in Silicon Valley, I used Koestler’s concept of bisociation to explain the hilarity of HBO’s Silicon Valley show.

In reading The Act of Creation, I’ve been struck by Koestler’s illustrations of the creative process, such as his representation of humor.

The underlying pattern, Koestler writes, “… is the perceiving of a situation or idea, L, in two self-consistent but habitually incompatible frames of reference, M1 and M2. The event L, in which the two intersect, is made to vibrate simultaneously on two different wavelengths, as it were. While this unusual situation lasts, L is not merely linked to one associative context, but bisociated with two.”

A simple example is a pun, where a single word takes on simultaneous meaning in two associative contexts. One can imagine similar diagrams representing other forms of discovery. In the above illustration, M1 could be physics, M2 could be astronomy, and L could be Kepler’s law of planetary motion.

Within each discourse, technique, or scientific paradigm, some patterns of behavior are intelligible while others are not. One can’t readily apply principles of military strategy when decorating a house for a party. Koestler uses chess as a metaphor to explain the dynamics of a thought matrix. Each matrix comes with a set of permissible moves, like the moves a rook or bishop can make on a chess board.


Another way of representing the permissible moves in a domain could be a maze, like the one I’ve drawn below. The walls of the maze dictate where you can wander and where you can’t. The red path illustrates one possible train of thought through the matrix.


Koestler’s visualizations of the creative process illustrate the fusing of two matrices of thought. Product managers, as I’ve explored starting with The Product Management Triangle, must synthesize input from three domains: business, users, and technology. Thus, I was inspired to visualize a model of trisociation, using mazes as metaphors for the constructs of each domain.

I created three maze forms; one for business, one for users, and one for technology. Each form represents a distinct way of thinking. The business dimension of product development requires an understanding of economics and the motivations of investors. Building a product that users love requires a mastery of behavioral psychology and design.  Engineering the product requires a grasp of physical possibility and deep knowledge of the chosen technology. Each one of these areas is independently deep, worthy of graduate-level study. Each one has its own language, history, and perceived dead-ends.


Building a product requires trisociating a business model, market (user group), and technology. Unfortunately, it’s impossible to know in advance whether a given blend will work together. The options are endless. It requires an abductive leap to choose the elements from each matrix for an attempted synthesis. The below illustration represents a possible area of focus within each domain: a business model, a target market, and a technology.


In this visual model, the trisociation stage is set by placing the chosen matrix regions in the shape of a three-dimensional cube, like this:



Let the attempted synthesis begin! As Marty Cagan says, the goal of product is to create something that’s “valuable, usable and feasible.” As entrepreneurs know, achieving just two of these conditions is hard. Attaining all three at once is magical.

The below maze cube represents the success state — a connected loop touching all three matrices: tech, users, and business.



Reaching the success state requires many things to go right.

For one, you need to choose matrix regions that can be trisociated. Some attempted syntheses fail to generate durable progress, like the fusion of astronomy and religion.

Let’s say we picked a different region from the users matrix. Maybe instead of focusing our app on the pet owners market, we focus on clowns.


We could end up in a formation where it’s impossible to create a connected loop touching all faces of the trisociation cube. We’re blocked.
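For the blocked-versus-trisociated distinction, a toy model may help: treat each permissible move as an edge in a graph, and ask whether any connected component of that graph spans all three domains. Everything below — the node names, edges, and domain labels — is invented purely for illustration; this is a sketch of the metaphor, not a real product-discovery algorithm.

```python
from collections import defaultdict, deque

def trisociated(edges, domain_of):
    """True if some connected component of the move graph
    touches all three domains: business, users, and tech."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen = set()
    for start in graph:
        if start in seen:
            continue
        # Breadth-first search to collect one component.
        component, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            if node not in component:
                component.add(node)
                queue.extend(graph[node] - component)
        seen |= component
        if {domain_of[n] for n in component} >= {"business", "users", "tech"}:
            return True
    return False

domains = {"b1": "business", "u1": "users", "t1": "tech"}

# A loop crossing all three matrices: trisociated.
print(trisociated([("b1", "u1"), ("u1", "t1"), ("t1", "b1")], domains))  # True

# Drop the edge back to business: the loop is blocked.
print(trisociated([("b1", "u1")], domains))  # False
```

In this toy model, a pivot is just swapping one region’s edges for another’s and re-running the check.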


You can think of a pivot as swapping in one matrix region for another. Flickr swapped in photo sharing in place of gaming. Groupon swapped in group shopping in place of political activism. With many of the same pieces, they moved from blocked to trisociated.

But even when you have compatible matrices fused together in your discovery process, it’s not guaranteed that you’ll find the path to synthesizing them. Freud took the right abductive leap to connect cocaine with medicine, but he missed the use case as a topical anesthetic.

As Koestler conveys in The Act of Creation, the person who originally juxtaposes two matrices is not always the one who makes the breakthrough connection. Just as Karl Koller, not Freud, is credited with the discovery of cocaine as a local anesthetic, Darwin wasn’t the first to bisociate biology and the struggle for existence. But Darwin was the one who most rigorously established the connection and demonstrated its validity.

This last diagram shows a combination of matrices that can be connected with a loop, but the traveler has yet to discover the path. The explorer, with their own biases and tunnel vision, can’t know for sure that there is a solution. Tragically, they may give up, or pivot, before finding it.



When Tension Snaps in Silicon Valley

I’ve heard my startup friends say they can’t watch HBO’s Silicon Valley because it’s “too true.” I feel the opposite. While it’s disconcerting to realize that startup people (like me) deserve to be mocked for certain behaviors, I love the show, in part, because it helps me understand the environment I’m immersed in. This sequence, in particular, has stuck with me.


I hadn’t put much thought into why I was so amused until I started reading Arthur Koestler’s 1964 book, The Act of Creation. Koestler argues that comedy involves “bisociation,” the connecting of two seemingly incompatible frames of reference. Here’s how he illustrates the type of humor that evokes sudden laughter:


The underlying pattern, Koestler writes, “… is the perceiving of a situation or idea, L, in two self-consistent but habitually incompatible frames of reference, M1 and M2. The event L, in which the two intersect, is made to vibrate simultaneously on two different wavelengths, as it were. While this unusual situation lasts, L is not merely linked to one associative context, but bisociated with two.”

In the case of the Silicon Valley sequence I posted above, the humor is in the ridiculousness of bisociating obscure technical innovations with notions of improving the world; e.g., “Making the world a better place… through Paxos algorithms for consensus protocols.” What it means to “make the world a better place” is subjective, but the implication of world-level change is that it’s large and foundational. Many startup pitches, in contrast, sound trivial and niche to outsiders. Silicon Valley’s mockery concisely illuminates an element of grandiose bullshit imbuing many startups: a poorly constructed facade of higher purpose.

Koestler writes:

The creative act of the humorist consisted in bringing about a momentary fusion between two habitually incompatible matrices. Scientific discovery … can be described in very similar terms–as the permanent fusion of matrices of thought previously believed to be incompatible.

In short: “Comic discovery is a paradox stated–scientific discovery is paradox resolved…”

Many of the same paradoxes that Silicon Valley exposes through satire are the ones I’ve been wrestling with on this blog and in my career as a product manager. With my latest series of writing, starting with The Fundamental Tension in Product, I provide heuristics for staying committed to a world-changing vision while navigating the on-the-ground reality of real work and market signal. While the presenters in the TechCrunch Disrupt parody fall flat in navigating the tension, some companies are able to pull it off. Amazon has arguably “changed the world” by commoditizing cloud-based computing. “Cloud-based computing” might sound esoteric to a layman, but it has had world-level impact by making the creation of websites significantly cheaper and easier.

The vision of many great tech companies can be expressed in the laughter-inducing syntax of the Silicon Valley pitches: “We’re going to make the world a better place, by [insert cryptic-sounding technical advancement].” Tesla’s vision has a different flavor with a similar structure. Elon Musk says he will save the world by creating cars for rich people. His master plan might actually hold up, but it could sound laughably delusional on the surface.

The paradox of a tech startup’s world-changing ambitions is just one of the interesting tensions illuminated by Silicon Valley. Another such tension is the one that exists between sales and engineering, illustrated by Pied Piper’s pivot to create The Box:


Starting with The Product Management Triangle, I’ve written in depth about the tensions that emerge around product between business, technology, and customers. These tensions can manifest as a clash between engineering and sales. Engineering generally wants to work on what’s unique, innovative, and artful. Sales wants the engineering team to build something that can be sold. Season 2 of Silicon Valley comically depicts an extreme resolution of the tension: simply letting the sales team dictate entirely what the engineering team builds. The result, of course, isn’t viable. The engineering team rebels. After the sales team gets fired, the pendulum swings to the other extreme. Pied Piper ships a product that was never tested on users other than engineers. The product was technologically brilliant but inaccessible to the general public.

I couldn’t think of a better stage-setting to explain the need for product management. It is precisely the job of the product manager to manage tensions like the one that led to The Box. The product manager must synthesize (or bisociate) seemingly conflicting inputs into a narrative that resonates across multiple planes of thought.

As much as I would like to escape the mockery of Silicon Valley, clearly I cannot. I’m hit dead on by Pied Piper’s temporary CEO with his “Conjoined Triangles of Success.”



Lean & Fat Product Thinking

In 2005, I took on my first product management job, developing data management tools for CNET. Wisely, the company put me through agile training. While the agile philosophy resonated with me, I quickly fell into a waterfall trap in my first large project. The objective was to create a tool that would streamline the workflow between the editorial and data teams. Given complexities in how the two teams needed to work together, we designed an elaborate user interface to handle every last corner case. Stakeholders were happy with the design, so the developers started building it. Even though we worked in two-week sprints, the engineering effort spiraled out of control. The will to invest in the project dried up before we finished implementing the planned design. We ultimately shipped a tool that cosmetically looked like our original design, but only portions of the user interface were functional.

While the tool was an embarrassment on many levels, it was not a complete failure. A feature that we built in the very beginning — the ability for the editorial team to publish a product review before a catalog record was processed by the data team — was a big efficiency gain over the previous state. If we had just shipped that one feature first and iterated on the design from there, we would have saved the company tons of money and delivered better results. Some of the corner cases we prepared for, in practice, never happened. The interface we handed over to the engineering team was insanely over-designed.

Fortunately, I only made that particular mistake once in my career. I started looking at all design assumptions with suspicion. After reading The Lean Startup, my conversion to skeptical product development was complete. I felt that every strategy needed to be rigorously tested and validated with users from the outset. Any small investment in an unvalidated direction, in my mind, was fat that should be cut out of our operation. Lean was my religion.

When Steve Jobs died in 2011, I read Walter Isaacson’s biography. Jobs’ aggressive pushing of vision, against resistance from co-workers and the marketplace, seemed key to Apple’s unprecedented impact. To a lean extremist, the Steve Jobs phenomenon is pretty confusing. My bias was to be adaptive to marketplace signal, not to ignore it. As someone with my own dreams of changing the world, I admired Steve Jobs. Yet, I espoused a doctrine that made his success unintelligible. In retrospect, I’m amazed by how long I lived with this contradiction.

The first time I became consciously aware that lean product thinking had limits was from reading Product-Driven versus Customer-Driven by Venkatesh Rao in 2014. Rao pointed out that, with lean startup methodology, attempts to iterate at minimal cost towards market validation can take you away from your original vision, leading to what he describes as “visioning-process debt.” Rao writes: “While there is a vast repository of programmer knowledge about refactoring techniques, there is no comparable metis around pivoting and paying off visioning-process debt.”

Rao’s words about the limitations of lean startup felt profound to me, but I didn’t know what was tangibly on the other side. If lean startup wasn’t an optimal way to think about product, what was the optimal way?

Additionally, Rao convincingly claimed that “there’s an indispensable asshole at the top for product-driven” companies. Since iterating based on customer feedback is insufficient to achieve large impact, an asshole is necessary to get a whole company to go along with a vision that cannot be market tested along the way. Being a visionary product thinker is insufficient — you need to intimidate a group into following you, or so the argument goes.

This was troubling to me since, with a high autonomic response rate, I’m biologically wired to not be an asshole. Was I facing a non-asshole ceiling in my ambitions as a product leader?

Time will tell whether I’m facing a non-asshole ceiling, but I’ve at least made progress grasping what’s on the other side of lean thinking. Rao’s recent piece offers a nice way of framing it: “fat thinking.” Of course! It’s only natural that fat thinking would follow lean thinking, similar to how philosophical movements zigzag between opposites, like structuralism leading to deconstruction.

Fat thinking, however, does not replace lean thinking. The optimal product crafting process must balance both modes, recognizing the correct context for each one.

Here’s how Rao characterizes fat thinking:

When you get away from lean regimes, you’re running fat: you’ve deployed more capital than is necessary for what you’re doing, and you’re deriving a return from it at a lower rate than the maximum possible.

So why would you ever deploy more capital than is necessary? Consider the decision to build a platform. A lean extremist would say that you shouldn’t build a platform until you’re scaling an app with proven demand. Otherwise, you risk creating a platform that no one will use. Why not first find out if people give a shit before doing the heavy lifting?

The fat thinker’s answer is that building a platform can enhance your ability to discover a killer app. If you start by building only a platformless prototype app and it fails user validation, only minor iterations come easily. There’s no way to quickly manifest the essence of the original vision in widely varying forms.

Platform fat can make you leaner in attacking specific opportunities. I saw this at my previous company. We had a platform for rating the sustainability of products. While we started with a consumer focus, we later discovered demand for an enterprise app. The platform made validating this direction cheap. If the original consumer app had been only a prototype, we would have had to start almost from scratch testing the enterprise app. Instead, it only required bolting on a new UI. As it turned out, we ultimately had to rebuild the platform, but we did so with a paying partner.

Building a platform before you’ve validated a killer app requires faith in a vision. In his 1896 paper The Will to Believe, philosopher William James explains why this type of untested belief is not in conflict with a scientific sensibility. James argues that there are cases, like religion, where you need to believe a hypothesis to find out if it’s true. Building a platform-first startup is such a case. You’re not going to build an unvalidated platform if you’re skeptical of its value.

This gets us to another difference between lean and fat thinking. In lean startup methodology, we’re predominantly skeptical of all hypotheses. A direction must be proven before it’s adopted. In fat thinking, we cannot be skeptical. We must believe that we are on the right course. Skepticism will cause failure. (My last post explains this point in more detail.)

In one sense, the distinction between lean and fat thinking maps to Karl Popper’s separation of science and pseudoscience. While the criterion has been debated by others, Popper argues that the defining characteristic of science is that its hypotheses must be falsifiable. Falsifiable hypotheses are key to lean startup methodology. If an experiment cannot result in a failure, what can you learn from running it? A pseudoscience, in contrast, is based on confirmation. Astrology has traction because so many happenings can be interpreted to confirm its claims. The visions that motivate fat thinking share pseudoscience’s potential for wide interpretation, but towards positive ends. A powerful product vision attracts believers. Google’s mission “to organize the world’s information” or Facebook’s mission “to make the world more open and connected” bring people under the tent in an astrology-like manner.

However, the believers attracted by companies like Apple, Facebook, and Google must also be scientists. Lean and fat thinking must be combined under the same roof. Elon Musk’s master plan for Tesla is a perfect illustration. Musk believes that his company can help prevent the collapse of civilization by driving sustainable changes through the economy (requires fat thinking). Doing so requires using limited resources to win in specific markets that exist now, starting in the high-end car market  (lean thinking).

Here’s my summary heuristic for how to combine lean and fat thinking:

  1. Use lean thinking to adapt your creations to the world as it is now.
  2. Use fat thinking to conceptualize how you will change the world or anticipate how the world will change.

Lean thinking is how you win markets, tap into behavioral patterns, adapt to the economy, and create a killer app with graphs that go up and to the right. Fat thinking is how you create markets, change human behavior, transform the economy, and build a platform that births a multitude of killer apps.

While much systematic thought has been published on lean manufacturing and thinking, there’s less out there on fat thinking. Today, fat thinking is primarily visible from (A) charismatic “assholes” who easily attract followers to their vision or (B) creationist-style thinkers who don’t believe that science has a role in product design.

To make fat thinking a fruitful enterprise for scientifically-minded non-assholes, we need to create new systems. Rao’s “leak before failure” notion is the beginning of one such system. Wardley value chain maps are another example. Both ideas provide justifications for adding fat to your creations in anticipation of world change.

To conclude, I’ve reposted the diagram from my post, The Fundamental Tension in Product.  The inner loops describe how to stay oriented in your current environment. The goal here is to stay lean. Quick cycles translate to market advantage. Your fat should come from the outer loops, your vision for how the world should or will be.



Dan Schmidt's thoughts on building products.