Introspective product builders can find themselves trapped in a paradox between (A) the product-driven Steve Jobs mentality of bending the world to your vision, and (B) the customer-driven lean startup mindset of adapting your products to the environment and the marketplace.
The product-driven mindset echoes an anti-realist branch of epistemology where the world is a construction of our minds. Anti-realists emphasize how reality is malleable, something that can be transformed by language and concepts.
The customer-driven approach, in contrast, echoes a realist philosophy. Epistemic realists believe that there is an absolute world that exists independently of us. It is the job of philosophers and scientists, they say, to conform our mental models to the external reality.
It’s easy to say that the product-driven / customer-driven paradox is a yin-yang that requires a balancing act as opposed to a resolution. However, as someone who perpetually yearns for maximum strategic clarity, I find “balance” to be an unsatisfying answer. I’d rather find a way around the ambiguity.
In The Democracy of Objects, philosopher Levi Bryant argues that the world views of both epistemological realists and anti-realists rest upon their bifurcation of subject and object. Bryant writes:
… within the Modernist schema that drives both epistemological realism and epistemological anti-realism, the world is bifurcated into two distinct domains: culture and nature. The domain of the subject and culture is treated as the world of freedom, meaning, signs, representations, language, power, and so on. The domain of nature is treated as being composed of matter governed by mechanistic causality. Implicit within forms of theory and philosophy that work with this bifurcated model is the axiom that the two worlds are to be kept entirely separate, such that there is to be no inmixing of their distinct properties.
Both realist and anti-realist mental models anthropomorphize nature such that objects can only be understood in human terms. Their debate is only about whether or not human representations correlate to an absolute reality. Anti-realists hold that there is no such thing beyond our constructions, while realists aspire to accurately map an independently real external environment. Bryant distills the contention as the X in the following diagram.
… this mode of distinction leads us to ignore the role of the nonhuman and asignifying in the form of technologies, weather patterns, resources, diseases, animals, natural disasters, the presence or absence of roads, the availability of water, animals, microbes, the presence or absence of electricity and high speed internet connections, modes of transportation, and so on. All of these things and many more besides play a crucial role in bringing humans together in particular ways…
Put in this light, both the realist and anti-realist philosophies seem ill-equipped to grapple with the realms of tech products where people and technologies are tangled together.
Alternatively, Bryant argues for an object-oriented ontology in which human and non-human objects are, first and foremost, objects. Wikipedia defines this philosophy as a “… school of thought that rejects the privileging of human existence over the existence of nonhuman objects.” The door then opens to examining not just how machines can mimic human systems, but how all forms of human and non-human objects interact with each other. Bryant writes:
… where the anti-realists have obsessively focused on a single gap between humans and objects, endlessly revolving around the manner in which objects are inaccessible to representation, object-oriented philosophy allows us to pluralize this gap, treating it not as a unique or privileged peculiarity of humans, but as true of all relations between objects whether they involve humans or not. In short, the difference between humans and other objects is not a difference in kind, but a difference in degree. Put differently, all objects translate one another. Translation is not unique to how the mind relates to the world. And as a consequence of this, no object has direct access to any other object.
While I’ve barely scratched the surface, object-oriented ontology feels like a solid foundation for product thinking. The extreme forms of both the product-driven and customer-driven mindsets give primacy to the single problem of how a company integrates with its external environment. It’s liberating to let go of this bifurcation and delve into the multiplicity of systems that require integration. Product builders must integrate teams, technologies, customers, and business models, all of which are simultaneously interacting with society, culture, nature, and the economy.
[People] make their own history, but they do not make it just as they please; they do not make it under circumstances chosen by themselves, but under circumstances directly found, given and transmitted from the past.
– Karl Marx (quoted by Levi Bryant)
Cognitive biologists Maturana and Varela argue that the defining quality of living things is that they are “autopoietic.” Autopoiesis literally means self-creation. Living things, Maturana and Varela explain, are able to regenerate their own parts, as when the cells of an animal die and regenerate or a species reproduces.
Autopoiesis is an intriguing lens through which to understand the aspirations of tech companies.
To examine what it means for a tech company to be autopoietic, let’s consider two systems on which it depends: its product and its customers. An autopoietic company maintains a circular loop where the team builds the product, customers use it, and the team is paid to keep iterating. This is otherwise described as product-market fit.
Startups are not born autopoietic — they are bootstrapped by investment and the passion of their founders. If these elements dry up before product-market fit is achieved, the team stops building and the company ceases.
Startups that achieve autonomous life remain vulnerable, like organisms. The departure of a leader, a shift in the technical landscape, or the collapse of a market can lead to a company’s demise.
The most ambitious founders want more than a brief period of success: they envision autopoietic institutions that will thrive after their departure, riding waves of technological and social change.
To achieve longevity, companies must behave more like species than organisms. Traits for survival must evolve across generations in the midst of environmental flux. Great companies maintain their circular loop of creation as employees are fired and hired, as customers are acquired and lost, and as technology is refactored. They preserve life and identity while their elements radically change.
Companies are often advised to define their “DNA.” Genetics is an apt metaphor for the passing of values to new hires. However, genetics is not the only mechanism for passing down survival advantages. In The Democracy of Objects, philosopher Levi Bryant writes:
… it’s worth recalling that Darwin nowhere specifies what the mechanism of inheritance is, only that in order for natural selection to take place there must be inheritance. There is thus no reason to suppose that genes alone are the sole mechanism of inheritance.
Organisms alter their environments to enhance their future prospects. There is a class of animals known as niche constructors who change their environment to suit their purposes. Beavers build dams and birds assemble nests.
For some species, survival is predicated on the non-genetic creations of their predecessors. Bryant quotes developmental systems theorists Griffiths and Gray:
Certain aphid species reliably pass on their endosymbiotic Buchnera bacteria from the maternal symbiont mass to either eggs or developing embryo. The bacteria enable their aphid hosts to utilize what would otherwise be nutritionally unsuitable host plants. Aphids that have been treated with antibiotics to eliminate the bacteria are stunted in growth, reproductively sterile, and die prematurely.
The point here is that the Buchnera bacteria is not a part of the aphid’s genome, but nonetheless plays a significant role in the development of the phenotype. Far from the genes already containing information in the form of a blueprint of what the organism will turn out to be, genes are one developmental causal factor among a variety of others.
The notion of non-genetic inheritance suggests that organizations can do more than pass down corporate values to new employees through training and culture. Companies can plant survival advantages in their environment; that is, within systems that exist independently of the team members themselves.
Here’s a simple example from my startup. We connected our post-purchase customer survey with our Slack communication system. Whenever a customer submits the survey, the Typeform app posts the response to a Slack channel where everyone sees it. Notifications are sent if the Net Promoter Score is beneath a certain threshold.
The result is that a baseline of real-time customer awareness is automatically provided for new employees. New team members don’t need to learn, “we value customer feedback,” and then consciously seek it out. Instead, customer feedback is already there in the environment. Employees “inherit” an advantage out of the gate that doesn’t require DNA transfer.
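The survey-to-Slack relay described above can be sketched as a small routing function. The payload shape, channel name, and threshold below are illustrative assumptions for this post, not Typeform’s or Slack’s actual schemas:

```python
# A minimal sketch of the survey-to-Slack relay. Field names, the channel,
# and the threshold are hypothetical; a real integration would receive a
# Typeform webhook and post to Slack's API.

NPS_ALERT_THRESHOLD = 7  # scores below this trigger a notification

def route_survey_response(response):
    """Decide how a survey response is surfaced to the team.

    `response` is a dict like {"customer": ..., "nps": 0-10, "comment": ...}.
    Returns (channel, message, alert) where `alert` flags a low score.
    """
    nps = response["nps"]
    alert = nps < NPS_ALERT_THRESHOLD
    message = f"{response['customer']} scored us {nps}/10: {response.get('comment', '')}"
    return "#customer-feedback", message, alert
```

Every response lands in the shared channel, so the environment carries the signal; the `alert` flag is what would fire an extra notification for detractors.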
The circular loop of product creation is bi-directional. Not only must the team build the product, but the product must build the team. Long-lasting products must be designed to perturb the team when something needs to be changed.
Constructing environmental survival advantages helps companies increase longevity through autopoiesis. However, a system’s achievement of autopoiesis isn’t necessarily good for the world at large. Maturana and Varela argue that the only goal of autopoietic systems is to maintain their circular loops of self-creation. Regardless of the mission and intentions of the founders, the most successful companies take on a life of their own, a life whose overall impact on connected systems is unpredictable.
I’m struck by how startups state their intention to “change the world” or “dent the universe.” Changes and dents, of course, can have both positive and negative impacts. Autopoietic systems created climate change and fake news.
Note: Thanks to Jordan Peacock for nudging me down the autopoiesis road and pointing me to Levi Bryant.
The dichotomy of the fox and the hedgehog is fascinating, in part, because it gets at a paradox we face in trying to achieve our goals. On the one hand, we need to be monomaniacal hedgehogs to relentlessly execute a vision. On the other hand, we need the fox’s open-mindedness and sound judgment to fit our plans with an uncertain environment.
Hedgehogs and foxes are often pitted against each other in competition. In this post, however, I’m going to propose a model for how the two mentalities can work together, intertwined in the creative process. The ability to strategically mix the hedgehog and fox cognitive styles, either on a team or within your skull, is critical for innovation.
The Fox-Hedgehog Metaphor
The fox-hedgehog philosophy started with the words of Archilochus, an Ancient Greek poet.
“a fox knows many things, but a hedgehog one important thing”
In a fight, the cunning fox deploys a diversity of strategies while the hedgehog has one defense, curling up into a ball. Philosopher Isaiah Berlin unleashes the metaphoric power in his essay The Hedgehog and the Fox.
Berlin describes how hedgehogs “… relate everything to a single central vision, one system, less or more coherent or articulate, in terms of which they understand, think and feel – a single, universal, organising principle in terms of which alone all that they are and say has significance…”
Foxes, in contrast, “… lead lives, perform acts and entertain ideas that are centrifugal rather than centripetal; their thought is scattered or diffused, moving on many levels, seizing upon the essence of a vast variety of experiences and objects for what they are in themselves, without, consciously or unconsciously, seeking to fit them into, or exclude them from, any one unchanging, all embracing, sometimes self-contradictory and incomplete, at times fanatical, unitary inner vision.”
Hedgehogs commit to a single framework for how the world works while foxes dance across many models.
It’s tempting to place thinkers in one profile or the other. Berlin says that Plato, Fyodor Dostoyevsky, and Friedrich Nietzsche are hedgehogs while Shakespeare, Aristotle, and James Joyce are foxes.
It’s also interesting to consider which cognitive profile is better for different purposes. Philip E. Tetlock’s research indicates that foxes are better than hedgehogs at predicting geopolitical events (see my last post). Jim Collins says that hedgehogs are better leaders. “They know how to simplify a complex world into a single, organizing idea—the kind of basic principle that unifies, organizes, and guides all decisions.”
A Collaboration Model
While many of us fit one profile better than the other, there’s the intriguing possibility that we, individually or in teams, could metamorphose between fox and hedgehog mentalities with strategic intent. The most successful innovators, it seems, can nimbly toggle between these two modes within a single problem space.
With this diagram, I’ve depicted what a fox-hedgehog collaboration process might look like.
Each segment of the loop corresponds to a component of the cognitive cycle described by American pragmatist philosopher Charles Sanders Peirce as summarized by computer scientist John F. Sowa. (Thanks to Joseph Kelly for the tip!) Let’s walk through the loop.
We form a theory for how the world works.
We deduce how to apply the theory to a particular context.
We predict the results of a planned action.
We perform the action.
Through induction, we learn from the resulting data and experience.
We accumulate new knowledge.
Given our new understanding, we use abduction to make the leap to a new theory based on a hunch or a guess.
In the Hedgehog-Fox Collaboration Loop, each animal owns half of the loop with two handoff points.
Theory, given its reductive and abstract nature, is clearly hedgehog territory. Theories rest upon Big Ideas that, in startup terms, exist in the form of visions or Thielian secrets. They roughly take the form, “By organizing around [insert Big Idea], we will [insert social mission].”
Many startup founders don’t think of their Big Ideas as “theories.” Their visions, to them, feel more like inevitabilities than hypotheses to be validated. However, reframing a hedgehog’s vision as a “theory” is a key step towards creating an environment where hedgehogs and foxes can collaborate. Theories are falsifiable. While leaders must be confident, they must have the humility to subject their brain children to testing.
Given a theory for how to enact world change, innovators must deduce the implications in a particular environment or market. Hedgehogs, masters of their theories, have a role to play in converting their ideas to actionable strategies. Foxes, however, are creatively advantaged in the planning process. By spotting cross-domain patterns, foxes can apply foreign techniques well-suited to the problem at hand. See this example of how an algorithm for interpreting telescope images was applied to breast cancer identification.
In evolved hedgehog-fox relationships, prediction is a point in the cognitive cycle where the fox can say “no.” As I mentioned above, Tetlock’s research shows that foxes are better forecasters than hedgehogs. Foxes can predict that a proposed action is so unlikely to succeed that it should not even be attempted, like a general refusing to accept the king’s order to attack at the wrong time. Foxes should decide which endeavors are most likely to yield the greatest benefit at the lowest cost.
Assuming acceptance of the Hedgehog-defined mission, foxes are best equipped to cleverly implement the plans. When reality hits, optimal action requires situational awareness and agility. Foxes can adjust tactics based on relatively unfiltered, malleable views of their changing environments.
The value of each action we perform is multiplied if we can learn from the results. Hedgehogs have a tendency to over-filter empirical evidence. Data that doesn’t fit the hedgehog’s model is ignored. Thus, we need foxes to own the inductive reasoning process. Foxes will pluck out interesting insights regardless of how conveniently they fit within the active paradigm.
Induction provides the ingredients for, as Sowa puts it, the “knowledge soup” in the cognitive cycle. The multi-model-minded foxes craft a knowledge soup that is diverse in flavors and textures. When the hedgehog re-enters the picture at the knowledge segment of the loop, they will encounter unfamiliar elements. The fox keeps the hedgehog’s thinking fresh.
In his discussion of hybridizing fox and hedgehog thinking, Venkatesh Rao writes that “The trick is to listen to everything, but also say no to almost everything… But once in a while, the fox half decides to pass something along to the hedgehog half, and the latter trusts the intel enough to change course.”
We saw earlier that foxes should say “no” to implementing some plans proposed by the hedgehog. Conversely, the hedgehog should say “no” to reacting to much of the new information supplied by the fox.
Yet, sometimes a foxy insight will penetrate the hedgehog’s stingy filter, exposing a flaw in the hedgehog’s model. In the collaboration loop, hedgehogs use abductive reasoning to form and revise their theories to accommodate the world illuminated by the fox, leaping to each new iteration based on a hunch or guess (see Joseph Kelly’s discussion of abductive reasoning and startups). A hedgehog’s passion for its Big Idea is a driving force of abduction. A singular focus propels the hedgehog to make necessary adjustments without losing the gravitational weight of the original vision.
Individuals and Teams
The hedgehog-fox collaboration loop can be applied to your own thought process or to a team workflow.
Venkatesh Rao invented the term “foxhog” to describe individuals who can combine fox and hedgehog thinking. A foxhog, Rao writes, is “A miraculous beast that turns uncertainty into certainty.”
One of my favorite examples of foxhog innovation is Gutenberg’s invention of the printing press, according to Arthur Koestler’s account in The Act of Creation. Gutenberg, in hedgehog fashion, fixated on a singular goal to print copies of the Bible at scale. In fox fashion, he temporarily let go of his mission to enjoy the wine harvest, where he stumbled upon the wine press. Gutenberg’s eureka moment occurred after his fox half told his hedgehog half about a mechanic of the winepress that could be applied to reliably print with movable type. I made a maze to tell this story.
I consider myself to be a natural hedgehog who is learning to be foxier over time.
My default mode is to survey the world with an eye towards exploiting new information for my goals. As a product manager, the products I work on become organizing principles for much of my cognition. Rao says that “hedgehogs have lots of books 5% read (judged by their cover), and a few 300% read (repeatedly and closely re-read).” This is me.
But I feel liberated to know that the best way to achieve my hedgehog dreams is, on occasion, to forget about them and enjoy the world as a fox, without an agenda. In doing so I can improve my judgment and eat yummier knowledge soup.
I also try to partner with foxes. Assuming we can’t all be perfect fox-hedgehog hybrids, we can at least aspire to apply both cognitive styles in aggregate on teams. To be productive, hedgehogs should recognize they need foxes and vice versa.
When you inject new software into an environment, it’s difficult to forecast what will transpire.
Product teams continually try to predict what will work. We predict that a certain new feature will increase conversion, that a targeted ad spend will bring critical mass, or that market conditions will change in our favor. We expect many of our bets to fail.
We inoculate ourselves from bad predictions with methodologies like Agile and Lean Startup. Measuring market signal with each iteration helps us quickly change course. If the brilliant new feature decreases conversion, we can remove it before our revenue plummets.
Minimizing losses from bad bets gives us more opportunities to try things. However, the experience of failed attempts does not reliably lead to better predictions in the future. Faced with new information, we can overreact, under-react, misunderstand, forget, or turn a blind eye. Emotions and cognitive biases can cloud our judgment.
Learning can go even more haywire at an organizational level. Add group-think, knowledge silos, and corporate amnesia to the list of epistemological threats.
But imagine if we could systematically improve the accuracy of our company’s predictions. Then we could reliably increase our ability to make winning bets over time.
I was surprised to discover that a playbook for optimizing prediction accuracy already exists. It’s called Superforecasting. But it’s not being widely leveraged by product teams yet.
The first step to improving something is measuring it. In the book Superforecasting, Philip E. Tetlock and Dan Gardner describe a system for measuring the accuracy of predictions called the Brier score. In their words, “Brier scores measure the distance between what you forecast and what actually happened.”
To explain Brier scores, let’s consider weather forecasting.
Weather forecasting involves making probabilistic predictions like “There’s a 70% chance it will rain tomorrow.” If it doesn’t rain tomorrow, that doesn’t mean the forecast was wrong, since it gave a 30% chance of no rain. This can be counter-intuitive: people tend, wrongly, to judge a forecast as mistaken whenever an event it called likely fails to happen.
In the terminology of the Brier score, a forecaster’s predictions are “well calibrated” if the things they say are X% likely to happen actually happen X% of the time. So if a meteorologist says “there’s a 70% chance of rain tomorrow” one hundred times during the year, they would be perfectly calibrated if it rained 70 times and didn’t rain the other 30. If it had only rained 50 times, we would say the forecaster is over-confident. If it had rained 80 times, they’re under-confident. Over-confidence and under-confidence are the two forms of mis-calibration.
The other dimension of the Brier score is resolution. Resolution measures the decisiveness of a forecaster’s predictions. If a meteorologist says the chance of rain is 100% when it does rain and 0% when it doesn’t, they would have perfect resolution.
Forecasters can be perfectly calibrated with poor resolution. Let’s say a forecaster predicts a 50% chance of rain every day. Even if it does rain exactly half of the days, their resolution will be dinged for not elevating the chances on the days it does rain and lowering them on the days when it doesn’t. Poor resolution implies overly cautious predictions.
The Brier score combines calibration and resolution into a single accuracy metric. The lower the Brier score, the more accurate a forecaster’s predictions.
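For binary yes/no questions, a common form of the Brier score is simply the mean squared distance between each probability and what happened (Tetlock’s tournament scoring has additional wrinkles for multi-outcome questions, but this sketch captures the idea):

```python
def brier_score(forecasts, outcomes):
    """Mean squared distance between probabilistic forecasts and reality.

    forecasts: probabilities assigned to "the event happens" (0.0 to 1.0)
    outcomes:  1 if the event happened, 0 if it didn't
    Lower is better: 0.0 is perfect; always saying 50% scores 0.25.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A decisive, well-calibrated forecaster scores near zero:
decisive = brier_score([0.9, 0.9, 0.1, 0.1], [1, 1, 0, 0])   # 0.01

# The perpetual 50% forecaster is perfectly calibrated here (rain on
# half the days) but has no resolution, and the score reflects it:
hedged = brier_score([0.5, 0.5, 0.5, 0.5], [1, 1, 0, 0])      # 0.25
```

The two toy forecasters make the calibration/resolution trade-off concrete: both are calibrated, but only the decisive one earns a low score.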
With a metric for prediction accuracy, we can scientifically investigate which factors lead individuals or teams to improve their foresight. Tetlock did exactly this to win a forecasting tournament administered by the United States Intelligence Community.
The forecasting tournament was created in the wake of the tragic false assessment that Iraq possessed WMDs. According to the IARPA website, the goal of the government program is to
… dramatically enhance the accuracy, precision, and timeliness of intelligence forecasts for a broad range of event types, through the development of advanced techniques that elicit, weight, and combine the judgments of many intelligence analysts.
The tournament, held over four years, asked participants to forecast questions like “Will Russia officially annex additional Ukrainian territory in the next three months?” and “In the next year, will any country withdraw from the eurozone?”
As measured by the Brier score, the predictions made by Tetlock’s group of “superforecasters” were 30% more accurate than the average for intelligence community analysts who had access to classified information.
While the highest-performing forecasters are smart, Tetlock concluded that making consistently accurate predictions has more to do with how they think than with how smart they are. Tetlock’s discoveries constitute a set of proven principles of good judgment, summarized here:
In philosophic outlook, [superforecasters] tend to be:
CAUTIOUS: Nothing is certain
HUMBLE: Reality is infinitely complex
NONDETERMINISTIC: What happens is not meant to be and does not have to happen
In their abilities and thinking styles, they tend to be:
ACTIVELY OPEN-MINDED: Beliefs are hypotheses to be tested, not treasures to be protected
INTELLIGENT AND KNOWLEDGEABLE, WITH A “NEED FOR COGNITION”: Intellectually curious, enjoy puzzles and mental challenges
REFLECTIVE: Introspective and self-critical
NUMERATE: Comfortable with numbers
In their methods of forecasting they tend to be:
PRAGMATIC: Not wedded to any idea or agenda
ANALYTICAL: Capable of stepping back from the tip-of-your-nose perspective and considering other views
DRAGONFLY-EYED: Value diverse views and synthesize them into their own
PROBABILISTIC: Judge using many grades of maybe
THOUGHTFUL UPDATERS: When facts change, they change their minds
GOOD INTUITIVE PSYCHOLOGISTS: Aware of the value of checking thinking for cognitive and emotional biases
In their work ethic, they tend to have:
A GROWTH MINDSET: Believe it’s possible to get better
GRIT: Determined to keep at it however long it takes
I love how Tetlock’s research puts data to philosophically rich concepts, like the hedgehog-fox dichotomy. The Ancient Greek poet Archilochus famously said that “a fox knows many things, but a hedgehog one important thing.” Tetlock categorized his forecasters into each persona.
Hedgehogs are fixated on one Big Idea; a single world view. They see everything through that lens. Tetlock describes this group in his forecasters pool:
Some were environmental doomsters (“ We’re running out of everything”); others were cornucopian boomsters (“ We can find cost-effective substitutes for everything”). Some were socialists (who favored state control of the commanding heights of the economy); others were free-market fundamentalists (who wanted to minimize regulation). As ideologically diverse as they were, they were united by the fact that their thinking was so ideological. They sought to squeeze complex problems into the preferred cause-effect templates and treated what did not fit as irrelevant distractions. Allergic to wishy-washy answers, they kept pushing their analyses to the limit (and then some), using terms like “furthermore” and “moreover” while piling up reasons why they were right and others wrong. As a result, they were unusually confident and likelier to declare things “impossible” or “certain.”
Foxes, in contrast, are open to many perspectives. They synthesize inputs from multiple models. Tetlock writes:
The [fox] group consisted of more pragmatic experts who drew on many analytical tools, with the choice of tool hinging on the particular problem they faced. These experts gathered as much information from as many sources as they could. When thinking, they often shifted mental gears, sprinkling their speech with transition markers such as “however,” “but,” “although,” and “on the other hand.” They talked about possibilities and probabilities, not certainties. And while no one likes to say “I was wrong,” these experts more readily admitted it and changed their minds.
The hedgehog vs. fox battle has been officially settled: foxes win, at least as far as forecasting goes. Tetlock found that “Foxes beat hedgehogs on both calibration and resolution. Foxes had real foresight. Hedgehogs didn’t.” Imagine an economist who relentlessly preaches austerity in the face of clear evidence showing the failure of austerity measures.
Yet, Tetlock doesn’t close the door on the utility of the hedgehog mindset. He raises the possibility that, while hedgehogs suck at forecasting, their commitment to a single frame of thinking might lead them to ask more illuminating questions. Tetlock speculates:
The psychological recipe for the ideal superforecaster may prove to be quite different from that for the ideal superquestioner, as superb question generation often seems to accompany a hedgehog-like incisiveness and confidence that one has a Big Idea grasp of the deep drivers of an event. That’s quite a different mindset from the foxy eclecticism and sensitivity to uncertainty that characterizes superb forecasting.
This is a compelling hypothesis for how foxes and hedgehogs can collaborate to achieve great things.
Applying Superforecasting to Startups
Now that we’ve taken a quick tour of superforecasting, let’s consider how the system could be applied to product development.
Imagine your team at an eCommerce startup is finishing a project to redesign the checkout page. Right before you launch an A/B test with the new page, you pose the question:
“Will the purchase conversion rate of the new checkout page beat the current page by 10% for the 30 days after launch?”
Everyone on your team provides probabilistic forecasts ranging from 10% to 80%. The average prediction gives a 30% chance that the new page will increase conversion by 10%.
If you had seen these forecasts before you committed to doing the project, would you have moved forward with it?
It depends. Given the scale of your company, maybe a 30% chance to increase revenue by 10% is a good bet to make. But maybe other endeavors are more likely to succeed, so you start forecasting all proposed projects before you prioritize them.
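One way to make “it depends” concrete is to weight each candidate project’s payoff by its forecast probability. The competing project and all the numbers below are hypothetical:

```python
def expected_lift(probability, lift):
    """Probability-weighted payoff of a project, as a fraction.

    A crude expected-value lens for comparing bets; real prioritization
    would also weigh cost, time, and the spread of possible outcomes.
    """
    return probability * lift

# The checkout redesign from above: a 30% chance of a 10% conversion lift
checkout = expected_lift(0.30, 0.10)    # ~0.03, i.e. a 3% expected lift

# A hypothetical safer project: near-certain but small
copy_tweak = expected_lift(0.90, 0.02)  # ~0.018
```

On expected value alone, the riskier redesign still beats the safe tweak, which is exactly the kind of comparison that pre-commitment forecasts make possible.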
And your forecasts need not be limited to the impact of potential projects. You can predict other factors related to the inner workings of your company, the technological landscape, or the market. Some examples:
“Will the engineering team finish refactoring our accounting system within two months of the project start date?”
“If we add 5 employees to the sales team in March, will our customer base grow by 20% by the end of the year?”
“Will the augmented/virtual reality market exceed $100 billion in revenue by 2020?”
“Will the word recognition accuracy rate of Google Voice reach 99% by the end of 2019?”
“Will the average health insurance deductible increase by 5% by the end of 2018?”
With accurate forecasts to questions like these, we’d be more aware of our capabilities and steps ahead of the market. We’d have the foresight necessary for exceptional strategy.
One might argue that companies are already making forecasts in the form of cost-benefit analysis or financial models. But these formats are too ambiguous to help us improve our predictions over time. Tetlock and Gardner write:
The first step in learning what works in forecasting, and what doesn’t, is to judge forecasts, and to do that we can’t make assumptions about what the forecast means. We have to know. There can’t be any ambiguity about whether a forecast is accurate or not…
In cost-benefit analysis, you might forecast the impact of a project to be high. But what exactly does “high” mean? And in what time frame must the benefit occur? There are too many shades of gray. Suppose we’re two months after launch with little impact from the project. Does that mean the prediction was wrong, or do we give it more time?
With a financial model, the projections will surely not be exactly right. So how can we determine if the model was correct? How close do the numbers need to be?
Cost-benefit analysis and financial models serve a real purpose, but they must be converted into unambiguous forecasts if we’re serious about improving our predictions over time.
So let’s say our company starts making a large number of unambiguous predictions. If we calculated our collective Brier score, we’d probably see that it leaves much to be desired.
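For reference, the Brier score is just the mean squared difference between your stated probabilities and what actually happened: 0 is perfect, and on binary questions always guessing 50% scores 0.25. A minimal sketch in Python, using hypothetical resolutions of the example questions above:

```python
def brier_score(forecasts):
    """Mean squared error between predicted probabilities and outcomes.

    forecasts: list of (probability, outcome) pairs, where probability
    is the stated P(event) and outcome is 1 if it happened, else 0.
    """
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Hypothetical forecasts and outcomes (invented for illustration):
team_forecasts = [
    (0.80, 1),  # "Refactor done within two months" -- it was
    (0.60, 0),  # "Customer base +20% by year end" -- it wasn't
    (0.30, 0),  # "AR/VR market exceeds $100B by 2020" -- it didn't
]

print(round(brier_score(team_forecasts), 3))  # → 0.163
```

Note that the score rewards calibration, not boldness: the 60% prediction that missed contributes far more error than the 30% one that also missed.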
Besides the fact that accurate forecasting is hard, I suspect many startup people are especially poor forecasters. There’s a hedgehoggy aspect to working on a product: it requires an almost irrational devotion to the product direction, a narrow lens that misses relevant opposing perspectives. We may be prone to overconfidence when predicting the success of product launches or improvements in market conditions. Exuberant optimism is conducive to good work but runs counter to accurate forecasting.
Perhaps company staff are too biased to forecast their own future proficiently. The solution could be to insert 3rd-party superforecasters into a company’s operations. It appears that Tetlock co-founded a commercial firm based on this idea, and I’m compelled to try it out. In the future, asking questions of superforecasters could become as routine as running user tests or customer surveys. I anticipate some challenges in transferring domain knowledge, at least for certain key questions, but these can be overcome.
But even without enlisting external superforecasters, applying the model is useful. These practices will be beneficial no matter what:
Creating unambiguous hypotheses for projects
Making predictions before you commit to resourcing
Looping back to old predictions to judge their accuracy
Turning these practices into habits is the goal of my side-project, Double-Loop.
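In code, those three practices amount to little more than a dated log of unambiguous predictions that get resolved and scored later. A minimal sketch (the structure and field names are my own invention, not Double-Loop’s):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Prediction:
    question: str          # unambiguous, with the deadline baked in
    probability: float     # stated before committing resources
    deadline: date
    outcome: Optional[bool] = None  # filled in when we loop back

    def resolve(self, happened: bool) -> float:
        """Record the outcome and return this prediction's squared error."""
        self.outcome = happened
        return (self.probability - int(happened)) ** 2

# Hypothetical entry: predict before the project starts...
log = [
    Prediction("Accounting refactor shipped by 2018-06-01", 0.8, date(2018, 6, 1)),
]
# ...then loop back after the deadline to judge accuracy.
print(round(log[0].resolve(True), 2))  # → 0.04
```

Averaging the squared errors across the whole log gives the team’s Brier score, so the habit loop and the scorekeeping come for free once predictions are written down unambiguously.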
A good way to summarize the potential of the superforecasting model for product teams is that it nudges us to become more like foxhogs, a hybrid animal invented by Venkatesh Rao. He explains:
The net effect of this personality profile is that the foxhog can live in a highly uncertain environment, and be responsively engaged within it, yet create a high-certainty execution context.
This ability to turn uncertainty into certainty is the key to … entrepreneurial effectiveness…
Accurate predictions are not sufficient for success. You need to ask the right questions. This requires leaving no stone unturned while you push forward on a singular mission to achieve your product vision.
Some of the most interesting inventions spark new ways for people and machines to interact. One of the best recent examples I’ve seen is the iOS game Really Bad Chess.
Conventional chess programs come with adjustable difficulty meters controlling the prowess of your computer opponent. At some difficulty levels, you can easily win. But when you ramp up the computer’s smartness to a certain level, you’re consistently defeated.
For a game to be pleasurable, there needs to be the right amount of tension. When facing a computer opponent, you can play a competitive game by setting its ability to a level similar to your own. But this feels like fake tension, like playing against someone you know isn’t trying hard. A win doesn’t feel like a true victory.
For me, the superiority of computers over humans in chess has a deflating effect that even makes it less fun to play against other humans. What’s the point of getting better at chess when the involved mental processes would be better delegated to a computer program, like long division or spell checking?
Really Bad Chess alters the stale dynamic between people and chess AI through a novel handicapping mechanism. In the app, the AI never gets smarter. Instead, the variable is the relative strength of your starting pieces. This subtle change makes a transformative difference in the experience of playing chess against a computer.
When you play your first game, your randomly generated starting pieces are way stronger than the AI’s, like this.
Here, your job is to not blow your advantage, which itself provokes an interesting mindset. With each win against the computer, its starting pieces get stronger, until you’re staring at a disadvantage, like this.
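The app’s actual algorithm isn’t public, but one plausible sketch of the handicap mechanism is to deal each side a random set of pieces whose total material value shifts with your win streak (the piece values are standard; the targets and tuning are my guesses):

```python
import random

# Standard material values; the king is placed separately.
PIECES = {"queen": 9, "rook": 5, "bishop": 3, "knight": 3, "pawn": 1}

def random_back_rank(target_value, rng=None):
    """Pick 7 random pieces (plus a king) whose total value is near target_value."""
    rng = rng or random.Random()
    best = None
    for _ in range(2000):  # rejection sampling: keep the closest candidate
        rank = [rng.choice(list(PIECES)) for _ in range(7)]
        value = sum(PIECES[p] for p in rank)
        if best is None or abs(value - target_value) < abs(best[1] - target_value):
            best = (rank, value)
    return best[0] + ["king"]

# As wins accumulate, the AI's material target rises while yours falls:
wins = 5
ai_rank = random_back_rank(20 + wins)      # AI starts stronger...
player_rank = random_back_rank(30 - wins)  # ...while you start weaker
```

The nice property of this design is that difficulty lives entirely in the position, not in the engine, which is exactly the inversion the game is built around.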
Having played a lot of chess against computers, my demise felt inevitable when I first played from behind against the AI. How could I beat the sharp precision of a computer program when it simply didn’t have to mess up to win? But then I realized that the AI in Really Bad Chess is prone to egregious blunders.
Here we have the paradigm shift! Really Bad Chess jostled me out of the habit of comparing my intelligence to the machine. In this case, it’s no contest: I’m smarter than the machine. So instead of feeling like I need to match the calculating ability of a computer, Really Bad Chess motivates me to use my creativity to win as an underdog. The challenge is not to play perfect chess: it’s to exploit mistakes, like a hacker finding vulnerabilities in a system.
But how much smarter than the machine am I? It’s unclear. Part of the beauty of Really Bad Chess is that there’s no clear wall. The more you win, the more severe your disadvantage becomes. But you could always discover new ways to leverage the AI’s weaknesses, making the impossible possible. Assuming the fallibility of one’s opponent leads to a growth mindset where there’s always hope of winning, regardless of the apparent odds.
Winning as a big underdog requires breaking with the orthodoxy of “good chess.” When playing against a conventional opponent, human or machine, one avoids making moves that lead to a clear negative consequence. E.g., you don’t knowingly move a piece to a square where it can be captured, unless it’s an intentional sacrifice for a larger strategic gain.
When facing a large disadvantage, however, your only shot at winning is when your opponent misses obvious opportunities. If you limit yourself to traditionally good moves, your range of options will be too limited to mount a comeback. In this sense, Really Bad Chess demands “bad” moves. There’s risk of getting punished, but this added freedom is what you need to tip the scale.
I love Really Bad Chess because it cultivates a mindset of innovation. Breakthroughs happen when you recognize that the standard playbook for a situation is not optimal for the actual world we’re in. Really Bad Chess takes you through an evolution of shedding the conventional chess playbook to make large adaptations to a subtly different reality.
Thanks to Tom Schmidt for sharing Really Bad Chess with me.
Product managers don’t create things the way people often think of creating. We usually aren’t writing the code or crafting pixel-perfect designs. Instead, our role is to synthesize diverse inputs from customers and contributors into a singular direction: a narrative that guides the team that builds the product hands-on.
On the surface, synthesis feels different from creation. Creation suggests introducing something new into the world while synthesis implies blending existing elements. However, Arthur Koestler explains in The Act of Creation that synthesis is not only a key part of the creative process but near to its essence. He uses the term “bisociation” to describe the fusion of two otherwise disparate matrices of thought. Koestler demonstrates that the greatest creative breakthroughs spanning science, art, and humor can be understood in terms of bisociation. Gutenberg’s invention of the printing press bisociated the techniques of the wine-press and the seal. Kepler’s laws of planetary motion bisociated the previously unconnected fields of astronomy and physics. In my post, When the Tension Snaps in Silicon Valley, I used Koestler’s concept of bisociation to explain the hilarity of HBO’s Silicon Valley show.
In reading The Act of Creation, I’ve been struck by Koestler’s illustrations of the creative process, such as his representation of humor.
The underlying pattern, Koestler writes, “… is the perceiving of a situation or idea, L, in two self-consistent but habitually incompatible frames of reference, M1 and M2. The event L, in which the two intersect, is made to vibrate simultaneously on two different wavelengths, as it were. While this unusual situation lasts, L is not merely linked to one associative context, but bisociated with two.”
A simple example is a pun, where a single word takes on simultaneous meaning in two associative contexts. One can imagine similar diagrams representing other forms of discovery. In the above illustration, M1 could be physics, M2 could be astronomy, and L could be Kepler’s law of planetary motion.
Within each discourse, technique, or scientific paradigm, some patterns of behavior are intelligible while others are not. One can’t readily apply principles of military strategy when decorating a house for a party. Koestler uses chess as a metaphor to explain the dynamics of a thought matrix. Each matrix comes with a set of permissible moves, like the moves a rook or bishop can make on a chess board.
Another way of representing the permissible moves in a domain could be a maze, like the one I’ve drawn below. The walls of the maze dictate where you can wander and where you can’t. The red path illustrates one possible train of thought through the matrix.
Koestler’s visualizations of the creative process illustrate the fusing of two matrices of thought. Product managers, as I’ve explored starting with The Product Management Triangle, must synthesize input from three domains: business, users, and technology. Thus, I was inspired to visualize a model of trisociation, using mazes as metaphors for the constructs of each domain.
I created three maze forms; one for business, one for users, and one for technology. Each form represents a distinct way of thinking. The business dimension of product development requires an understanding of economics and the motivations of investors. Building a product that users love requires a mastery of behavioral psychology and design. Engineering the product requires a grasp of physical possibility and deep knowledge of the chosen technology. Each one of these areas is independently deep, worthy of graduate-level study. Each one has its own language, history, and perceived dead-ends.
Building a product requires trisociating a business model, market (user group), and technology. Unfortunately, it’s impossible to know in advance whether a given blend will work together. The options are endless. It requires an abductive leap to choose the elements from each matrix for an attempted synthesis. The below illustration represents a possible area of focus within each domain: a business model, a target market, and a technology.
In this visual model, the trisociation stage is set by placing the chosen matrix regions in the shape of a three-dimensional cube, like this:
Let the attempted synthesis begin! As Marty Cagan says, the goal of product is to create something that’s “valuable, usable and feasible.” As entrepreneurs know, achieving just two of these conditions is hard. Attaining all three at once is magical.
The below maze cube represents the success state — a connected loop touching all three matrices; tech, users, and business.
Reaching the success state requires many things to go right.
For one, you need to choose matrix regions that can be trisociated. Some attempted syntheses fail to generate durable progress, like the fusion of astronomy and religion.
Let’s say we picked a different region from the users matrix. Maybe instead of focusing our app on the pet owners market, we focus on clowns.
We could end up in a formation where it’s impossible to create a connected loop touching all three faces of the trisociation cube. We’re blocked.
You can think of a pivot as swapping in one matrix region for another. Flickr swapped in photo sharing in place of gaming. Groupon swapped in shopping for political movements. With many of the same pieces, they moved from blocked to trisociated.
But even when you have compatible matrices fused together in your discovery process, it’s not guaranteed that you’ll find the path to synthesizing them. Freud took the right abductive leap to connect cocaine with medicine, but he missed the use case as a topical anesthetic.
As Koestler conveys in The Act of Creation, the person who originally juxtaposes two matrices is not always the one who makes the breakthrough connection. Just as Karl Koller, not Freud, is credited with the discovery of cocaine as a local anesthetic, Darwin wasn’t the first to bisociate biology and the struggle for existence. But Darwin was the one who most rigorously established the connection and demonstrated its validity.
This last diagram shows a combination of matrices that can be connected with a loop, but the traveler has yet to discover the path. The explorer, with their own biases and tunnel vision, can’t know for sure that a solution exists. Tragically, they may give up, or pivot, before finding it.