When you inject new software into an environment, it’s difficult to forecast what will transpire.

Product teams continually try to predict what will work. We predict that a certain new feature will increase conversion, that a targeted ad spend will bring critical mass, or that market conditions will change in our favor. We expect many of our bets to fail.

We inoculate ourselves against bad predictions with methodologies like Agile and Lean Startup. Measuring market signal with each iteration helps us quickly change course. If the brilliant new feature decreases conversion, we can remove it before our revenue plummets.

Minimizing losses from bad bets gives us more opportunities to try things. However, the experience of failed attempts does not reliably lead to better predictions in the future. Faced with new information, we can overreact, under-react, misunderstand, forget, or turn a blind eye. Emotions and cognitive biases can cloud our judgment.

Learning can go even more haywire at an organizational level. Add group-think, knowledge silos, and corporate amnesia to the list of epistemological threats.

But imagine if we could systematically improve the accuracy of our company’s predictions. Then we could reliably increase our ability to make winning bets over time.

I was surprised to discover that a playbook for optimizing prediction accuracy already exists. It’s called Superforecasting. But it’s not being widely leveraged by product teams yet.

Superforecasting

The first step to improving something is measuring it. In the book Superforecasting, Philip E. Tetlock and Dan Gardner describe a system for measuring the accuracy of predictions called the Brier score. In their words, “Brier scores measure the distance between what you forecast and what actually happened.”

To explain Brier scores, let’s consider weather forecasting.

Weather forecasting involves making probabilistic predictions like “There’s a 70% chance it will rain tomorrow.” If it doesn’t rain tomorrow, that doesn’t mean the forecast was wrong, since it gave a 30% chance of no rain. This can be counter-intuitive: people tend to judge a forecast wrong whenever something it called likely doesn’t end up happening.

In the terminology of the Brier score, a forecaster’s predictions are “well calibrated” if the things they say are X% likely to happen actually happen X% of the time. So if a meteorologist says “there’s a 70% chance of rain tomorrow” one hundred times during the year, they would be perfectly calibrated if it rained 70 times and didn’t rain the other 30. If it had only rained 50 times, we would say the forecaster is over-confident. If it had rained 80 times, they’re under-confident. Over-confidence and under-confidence are the two forms of mis-calibration.

[Chart: an over-confident forecaster’s calibration curve]

[Chart: an under-confident forecaster’s calibration curve]

[Chart: a perfectly calibrated forecaster]
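To make calibration concrete, here’s a minimal sketch in Python, using a made-up forecast history, that buckets a forecaster’s stated probabilities and compares each bucket to how often it actually rained:

```python
from collections import defaultdict

# Hypothetical forecast history: (stated probability of rain, did it rain?)
forecasts = [(0.7, True), (0.7, True), (0.7, False), (0.7, True),
             (0.3, False), (0.3, True), (0.3, False), (0.3, False)]

buckets = defaultdict(list)
for prob, rained in forecasts:
    buckets[prob].append(rained)

for prob in sorted(buckets):
    outcomes = buckets[prob]
    observed = sum(outcomes) / len(outcomes)  # how often it actually rained
    print(f"Said {prob:.0%}: rained {observed:.0%} of the time over {len(outcomes)} forecasts")

# A well-calibrated forecaster's observed frequencies match the stated
# probabilities in every bucket; large gaps indicate mis-calibration.
```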

The other dimension of the Brier score is resolution. Resolution measures the decisiveness of a forecaster’s predictions. If a meteorologist says the chance of rain is 100% when it does rain and 0% when it doesn’t, they would have perfect resolution.

Forecasters can be perfectly calibrated with poor resolution. Let’s say a forecaster predicts a 50% chance of rain every day. Even if it does rain exactly half of the days, their resolution will be dinged for not elevating the chances on the days it does rain and lowering them on the days when it doesn’t. Poor resolution implies overly cautious predictions.

The Brier score combines calibration and resolution into a single accuracy metric. The lower the Brier score, the more accurate a forecaster’s predictions.
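In the book, scores run from 0 (perfect) to 2 (perfectly wrong); for binary yes/no questions, an equivalent formulation (up to a factor of two) is simply the mean squared difference between the forecast probability and the outcome. A minimal sketch, again with invented numbers:

```python
def brier_score(forecasts):
    """Mean squared difference between forecast probabilities and outcomes.
    0.0 is perfect; 0.25 is what always saying 50% earns; 1.0 is maximally wrong."""
    return sum((p - (1.0 if happened else 0.0)) ** 2
               for p, happened in forecasts) / len(forecasts)

# Hypothetical track records: (probability given, whether the event happened)
decisive_and_right = [(0.9, True), (0.9, True), (0.1, False), (0.1, False)]
always_fifty_fifty = [(0.5, True), (0.5, True), (0.5, False), (0.5, False)]

print(brier_score(decisive_and_right))  # 0.01 -- well calibrated, high resolution
print(brier_score(always_fifty_fifty))  # 0.25 -- calibrated here, but no resolution
```

On this invented record the always-50% forecaster is calibrated (it rained on half the days) yet earns a worse score, which is the resolution penalty described above.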

With a metric for prediction accuracy, we can scientifically investigate which factors lead individuals or teams to improve their foresight. Tetlock did exactly this to win a forecasting tournament administered by the United States Intelligence Community.

The forecasting tournament was created in the wake of the tragic false assessment that Iraq possessed WMDs. According to the IARPA website, the goal of the government program is to

… dramatically enhance the accuracy, precision, and timeliness of intelligence forecasts for a broad range of event types, through the development of advanced techniques that elicit, weight, and combine the judgments of many intelligence analysts.

The tournament, held over four years, asked participants to forecast questions like “Will Russia officially annex additional Ukrainian territory in the next three months?” and “In the next year, will any country withdraw from the eurozone?”

As measured by the Brier score, the predictions made by Tetlock’s group of “superforecasters” were 30% more accurate than the average for intelligence community analysts who had access to classified information.

While the highest-performing forecasters are smart, Tetlock concluded that making consistently accurate predictions has more to do with how they think than with raw intelligence. Tetlock’s discoveries constitute a set of proven principles of good judgment, summarized here:

In philosophic outlook, [superforecasters] tend to be:

CAUTIOUS: Nothing is certain

HUMBLE: Reality is infinitely complex

NONDETERMINISTIC: What happens is not meant to be and does not have to happen

In their abilities and thinking styles, they tend to be:

ACTIVELY OPEN-MINDED: Beliefs are hypotheses to be tested, not treasures to be protected

INTELLIGENT AND KNOWLEDGEABLE, WITH A “NEED FOR COGNITION”: Intellectually curious, enjoy puzzles and mental challenges

REFLECTIVE: Introspective and self-critical

NUMERATE: Comfortable with numbers

In their methods of forecasting they tend to be:

PRAGMATIC: Not wedded to any idea or agenda

ANALYTICAL: Capable of stepping back from the tip-of-your-nose perspective and considering other views

DRAGONFLY-EYED: Value diverse views and synthesize them into their own

PROBABILISTIC: Judge using many grades of maybe

THOUGHTFUL UPDATERS: When facts change, they change their minds

GOOD INTUITIVE PSYCHOLOGISTS: Aware of the value of checking thinking for cognitive and emotional biases

In their work ethic, they tend to have:

A GROWTH MINDSET: Believe it’s possible to get better

GRIT: Determined to keep at it however long it takes

I love how Tetlock’s research puts data to philosophically rich concepts, like the hedgehog-fox dichotomy. The Ancient Greek poet Archilochus famously said that “a fox knows many things, but a hedgehog one important thing.” Tetlock categorized his forecasters into each persona.

Hedgehogs are fixated on one Big Idea, a single world view. They see everything through that lens. Tetlock describes this group in his pool of forecasters:

Some were environmental doomsters (“We’re running out of everything”); others were cornucopian boomsters (“We can find cost-effective substitutes for everything”). Some were socialists (who favored state control of the commanding heights of the economy); others were free-market fundamentalists (who wanted to minimize regulation). As ideologically diverse as they were, they were united by the fact that their thinking was so ideological. They sought to squeeze complex problems into the preferred cause-effect templates and treated what did not fit as irrelevant distractions. Allergic to wishy-washy answers, they kept pushing their analyses to the limit (and then some), using terms like “furthermore” and “moreover” while piling up reasons why they were right and others wrong. As a result, they were unusually confident and likelier to declare things “impossible” or “certain.”

Foxes, in contrast, are open to many perspectives. They synthesize inputs from multiple models. Tetlock writes:

The [fox] group consisted of more pragmatic experts who drew on many analytical tools, with the choice of tool hinging on the particular problem they faced. These experts gathered as much information from as many sources as they could. When thinking, they often shifted mental gears, sprinkling their speech with transition markers such as “however,” “but,” “although,” and “on the other hand.” They talked about possibilities and probabilities, not certainties. And while no one likes to say “I was wrong,” these experts more readily admitted it and changed their minds.

The hedgehog vs. fox battle has been officially settled: foxes win, at least as far as forecasting goes. Tetlock found that “Foxes beat hedgehogs on both calibration and resolution. Foxes had real foresight. Hedgehogs didn’t.” Imagine an economist who relentlessly preaches austerity in the face of clear evidence showing the failure of austerity measures.

Yet, Tetlock doesn’t close the door on the utility of the hedgehog mindset. He raises the possibility that, while hedgehogs suck at forecasting, their commitment to a single frame of thinking might lead them to ask more illuminating questions. Tetlock speculates:

The psychological recipe for the ideal superforecaster may prove to be quite different from that for the ideal superquestioner, as superb question generation often seems to accompany a hedgehog-like incisiveness and confidence that one has a Big Idea grasp of the deep drivers of an event. That’s quite a different mindset from the foxy eclecticism and sensitivity to uncertainty that characterizes superb forecasting.

This is a compelling hypothesis for how foxes and hedgehogs can collaborate to achieve great things.

Applying Superforecasting to Startups

Now that we’ve taken a quick tour of superforecasting, let’s consider how the system could be applied to product development.

Imagine your team at an eCommerce startup is finishing a project to redesign the checkout page. Right before you launch an A/B test with the new page, you pose the question:

“Will the purchase conversion rate of the new checkout page beat the current page by 10% for the 30 days after launch?”

Everyone on your team provides probabilistic forecasts ranging from 10% to 80%. The average prediction gives a 30% chance that the new page will increase conversion by 10%.

If you had seen these forecasts before you committed to doing the project, would you have moved forward with it?

It depends. Given the scale of your company, maybe a 30% chance to increase revenue by 10% is a good bet to make. But maybe other endeavors are more likely to succeed, so you start forecasting all proposed projects before you prioritize them.
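Here’s a minimal sketch of what that prioritization could look like; the project names, forecast numbers, and payoff estimates are all invented for illustration:

```python
# Hypothetical pipeline of proposed projects. For each, the team forecasts the
# probability of hitting its stated goal, and we estimate the payoff (say,
# incremental monthly revenue) if it does. All numbers are made up.
projects = {
    "checkout redesign":    {"forecasts": [0.1, 0.2, 0.2, 0.2, 0.8], "payoff": 100_000},
    "referral program":     {"forecasts": [0.5, 0.6, 0.6, 0.7],      "payoff": 40_000},
    "free shipping banner": {"forecasts": [0.7, 0.8, 0.9],           "payoff": 25_000},
}

for name, project in projects.items():
    avg = sum(project["forecasts"]) / len(project["forecasts"])  # team average
    project["expected"] = avg * project["payoff"]                 # expected payoff of the bet
    print(f"{name}: {avg:.0%} chance of success, expected value {project['expected']:,.0f}")

# Rank by expected value (in practice, cost and effort would weigh in too).
ranked = sorted(projects, key=lambda name: projects[name]["expected"], reverse=True)
print("Priority order:", ranked)
```

A lower-probability bet with a bigger payoff can still come out on top, which is exactly why the forecast belongs in the prioritization conversation rather than after it.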

And your forecasts need not be limited to the impact of potential projects. You can predict other factors related to the inner workings of your company, the technological landscape, or the market. Some examples:

“Will the engineering team finish refactoring our accounting system within two months of the project start date?”

“If we add 5 employees to the sales team in March, will our customer base grow by 20% by the end of the year?”

“Will the augmented/virtual reality market exceed $100 billion in revenue by 2020?”

“Will the word recognition accuracy rate of Google Voice reach 99% by the end of 2019?”

“Will the average health insurance deductible increase by 5% by the end of 2018?”

With accurate forecasts to questions like these, we’d be more aware of our own capabilities and steps ahead of the market. We’d have the foresight necessary for exceptional strategy.

One might argue that companies are already making forecasts in the form of cost-benefit analysis or financial models. But these formats are too ambiguous to help us improve our predictions over time. Tetlock and Gardner write:

The first step in learning what works in forecasting, and what doesn’t, is to judge forecasts, and to do that we can’t make assumptions about what the forecast means. We have to know. There can’t be any ambiguity about whether a forecast is accurate or not…

In cost-benefit analysis, you might forecast the impact of a project to be high. But what exactly does “high” mean? And in what time frame must the benefit occur? There are too many shades of gray. And maybe we’re two months after launch with little impact from the project. Does that mean the prediction is wrong or do we give it more time?

With a financial model, the projections will surely not be exactly right. So how can we determine if the model was correct? How close do the numbers need to be?

Cost-benefit analysis and financial models serve a real purpose, but they must be converted into unambiguous forecasts if we’re serious about improving our predictions over time.

So let’s say our company starts making a high quantity of unambiguous predictions. If we calculated our collective Brier score, we’d probably see that it leaves much to be desired.

Besides the fact that accurate forecasting is hard, I suspect many startup people are especially poor forecasters. There’s a hedgehoggy aspect to working on a product. It requires an almost irrational devotion to the product direction, a narrow lens that misses relevant opposing perspectives. We may be prone to overconfidence when predicting the success of product launches or favorable shifts in the surrounding market. Exuberant optimism is conducive to good work but runs counter to accurate forecasting.

Perhaps company staff are too biased to proficiently forecast their own future. The solution could be to insert third-party superforecasters into a company’s operations. It appears that Tetlock co-founded a commercial firm based on this idea. I’m compelled to try this out. In the future, asking questions of superforecasters could become as ubiquitous as running user tests or customer surveys. I anticipate some challenges in transferring domain knowledge, at least for certain key questions, but these can be overcome.

But even without enlisting external superforecasters, applying the model is useful. These practices will be beneficial no matter what:

  • Creating unambiguous hypotheses for projects
  • Making predictions before you commit to resourcing
  • Looping back to old predictions to judge their accuracy

Turning these practices into habits is the goal of my side-project, Double-Loop.
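As a rough sketch of those three practices in code (the structure and field names here are my own invention for illustration, not a description of Double-Loop), a prediction log might record an unambiguous question, a probability, and a deadline, then get resolved later into a running Brier score:

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class Prediction:
    question: str                    # unambiguous, with the deadline baked in
    probability: float               # forecast made before committing resources
    due: date
    outcome: Optional[bool] = None   # filled in when we loop back to judge it

log: List[Prediction] = [
    Prediction("New checkout page beats current conversion by 10% in the 30 days after launch",
               0.30, date(2017, 4, 1)),
    Prediction("Accounting refactor finishes within two months of the project start date",
               0.60, date(2017, 6, 1)),
]

def running_brier(log: List[Prediction]) -> float:
    """Brier score over the predictions that have been resolved so far."""
    resolved = [p for p in log if p.outcome is not None]
    return sum((p.probability - (1.0 if p.outcome else 0.0)) ** 2
               for p in resolved) / len(resolved)

# Looping back once the deadlines have passed:
log[0].outcome = False
log[1].outcome = True
print(f"Running Brier score: {running_brier(log):.3f}")  # 0.125; lower is better
```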

A good way to summarize the potential of the superforecasting model for product teams is that it nudges us to become more like foxhogs, a hybrid animal invented by Venkatesh Rao. He explains:

The net effect of this personality profile is that the foxhog can live in a highly uncertain environment, and be responsively engaged within it, yet create a high-certainty execution context.
This ability to turn uncertainty into certainty is the key to … entrepreneurial effectiveness…

Accurate predictions are not sufficient for success. You need to ask the right questions. This requires leaving no stone unturned while you push forward on a singular mission to achieve your product vision.

Some of the most interesting inventions spark new ways for people and machines to interact. One of the best examples I’ve seen of this lately is the iOS game Really Bad Chess.

Conventional chess programs come with adjustable difficulty meters controlling the prowess of your computer opponent. At some difficulty levels, you can easily win. But when you ramp up the computer’s smartness to a certain level, you’re consistently defeated.

For a game to be pleasurable, there needs to be the right amount of tension. When facing a computer opponent, you can play a competitive game by setting its ability to a level similar to your own. But this feels like fake tension, like playing against someone you know isn’t trying hard. A win doesn’t feel like a true victory.

For me, the superiority of computers over humans in chess has a deflating effect that even makes it less fun to play against other humans. What’s the point of getting better at chess when the involved mental processes would be better delegated to a computer program, like long division or spell checking?

Really Bad Chess alters the stale dynamic between people and chess AI through a novel handicapping mechanism. In the app, the AI never gets smarter. Instead, the variable is the relative strength of your starting pieces. This subtle change makes a transformative difference in the experience of playing chess against a computer.

When you play your first game, your randomly generated starting pieces are way stronger than the AI’s, like this.

[Screenshot: a Really Bad Chess starting position where your pieces are far stronger than the AI’s]

Here, your job is to not blow your advantage, which itself provokes an interesting mindset. With each win against the computer, its starting pieces get stronger, until you’re staring at a disadvantage, like this.

[Screenshot: a starting position where the AI’s pieces are stronger than yours]

Having played a lot of chess against computers, my demise felt inevitable when I first played from behind against the AI. How could I beat the sharp precision of a computer program when it simply didn’t have to mess up to win? But then I realized that the AI in Really Bad Chess is prone to egregious blunders.

Here we have the paradigm shift! Really Bad Chess jostled me out of the habit of comparing my intelligence to the machine. In this case, it’s no contest: I’m smarter than the machine. So instead of feeling like I need to match the calculating ability of a computer, Really Bad Chess motivates me to use my creativity to win as an underdog. The challenge is not to play perfect chess: it’s to exploit mistakes, like a hacker finding vulnerabilities in a system.

But how much smarter than the machine am I? It’s unclear. Part of the beauty of Really Bad Chess is that there’s no clear wall. The more you win, the more severe your disadvantage becomes. But you could always discover new ways to leverage the AI’s weaknesses, making the impossible possible. Assuming the fallibility of one’s opponent leads to a growth mindset where there’s always hope of winning, regardless of the apparent odds.

Winning as a big underdog requires breaking with the orthodoxy of “good chess.” When playing against a conventional opponent, human or machine, one avoids making moves that lead to a clear negative consequence. E.g., you don’t knowingly move a piece to a square where it can be captured, unless it’s an intentional sacrifice for a larger strategic gain.

When facing a large disadvantage, however, your only shot at winning is when your opponent misses obvious opportunities. If you limit yourself to traditionally good moves, your range of options will be too limited to mount a comeback. In this sense, Really Bad Chess demands “bad” moves. There’s a risk of getting punished, but this added freedom is what you need to tip the scale.

I love Really Bad Chess because it cultivates a mindset of innovation. Breakthroughs happen when you recognize that the standard playbook for a situation is not optimal for the actual world we’re in. Really Bad Chess takes you through an evolution of shedding the conventional chess playbook to make large adaptations to a subtly different reality.


Thanks to Tom Schmidt for sharing Really Bad Chess with me.

 

Product managers don’t create things the way people often think of creating. We usually aren’t writing the code or crafting pixel-perfect designs. Instead, our role is to synthesize diverse inputs from customers and contributors into a singular direction: a narrative that guides the team that builds the product hands-on.

On the surface, synthesis feels different from creation. Creation suggests introducing something new into the world while synthesis implies blending existing elements. However, Arthur Koestler explains in The Act of Creation that synthesis is not only a key part of the creative process but near to its essence. He uses the term “bisociation” to describe the fusion of two otherwise disparate matrices of thought. Koestler demonstrates that the greatest creative breakthroughs spanning science, art, and humor can be understood in terms of bisociation. Gutenberg’s invention of the printing press bisociated the techniques of the wine-press and the seal. Kepler’s laws of planetary motion bisociated the previously unconnected fields of astronomy and physics. In my post, When the Tension Snaps in Silicon Valley, I used Koestler’s concept of bisociation to explain the hilarity of HBO’s Silicon Valley show.

In reading The Act of Creation, I’ve been struck by Koestler’s illustrations of the creative process, such as his representation of humor:

[Diagram: Koestler’s illustration of bisociation in humor]

The underlying pattern, Koestler writes, “… is the perceiving of a situation or idea, L, in two self-consistent but habitually incompatible frames of reference, M1 and M2. The event L, in which the two intersect, is made to vibrate simultaneously on two different wavelengths, as it were. While this unusual situation lasts, L is not merely linked to one associative context, but bisociated with two.”

A simple example is a pun, where a single word takes on simultaneous meaning in two associative contexts. One can imagine similar diagrams representing other forms of discovery. In the above illustration, M1 could be physics, M2 could be astronomy, and L could be Kepler’s laws of planetary motion.

Within each discourse, technique, or scientific paradigm, some patterns of behavior are intelligible while others are not. One can’t readily apply principles of military strategy when decorating a house for a party. Koestler uses chess as a metaphor to explain the dynamics of a thought matrix. Each matrix comes with a set of permissible moves, like the moves a rook or bishop can make on a chess board.

[Diagram: a chess board as a metaphor for the permissible moves within a matrix of thought]

Another way of representing the permissible moves in a domain could be a maze, like the one I’ve drawn below. The walls of the maze dictate where you can wander and where you can’t. The red path illustrates one possible train of thought through the matrix.

[Maze diagram: the red path traces one possible train of thought through the matrix]

Koestler’s visualizations of the creative process illustrate the fusing of two matrices of thought. Product managers, as I’ve explored starting with The Product Management Triangle, must synthesize input from three domains: business, users, and technology. Thus, I was inspired to visualize a model of trisociation, using mazes as metaphors for the constructs of each domain.

I created three maze forms; one for business, one for users, and one for technology. Each form represents a distinct way of thinking. The business dimension of product development requires an understanding of economics and the motivations of investors. Building a product that users love requires a mastery of behavioral psychology and design.  Engineering the product requires a grasp of physical possibility and deep knowledge of the chosen technology. Each one of these areas is independently deep, worthy of graduate-level study. Each one has its own language, history, and perceived dead-ends.

[Maze diagrams: the business, users, and technology matrices]

Building a product requires trisociating a business model, market (user group), and technology. Unfortunately, it’s impossible to be certain that a given blend will work together. The options are endless. It requires an abductive leap to choose the elements from each matrix for an attempted synthesis. The illustration below represents a possible area of focus within each domain: a business model, a target market, and a technology.

[Maze diagrams: a focused region selected within each of the three matrices]

In this visual model, the trisociation stage is set by placing the chosen matrix regions in the shape of a three-dimensional cube, like this:

 

[Diagram: the three chosen matrix regions arranged as the faces of a cube]

Let the attempted synthesis begin! As Marty Cagan says, the goal of product is to create something that’s “valuable, usable and feasible.” As entrepreneurs know, achieving just two of these conditions is hard. Attaining all three at once is magical.

The below maze cube represents the success state — a connected loop touching all three matrices; tech, users, and business.

 

[Diagram: the success state, a connected loop touching all three faces of the cube]

Reaching the success state requires many things to go right.

For one, you need to choose matrix regions that can be trisociated. Some attempted syntheses fail to generate durable progress, like the fusion of astronomy and religion.

Let’s say we picked a different region from the users matrix. Maybe instead of focusing our app on the pet owners market, we focus on clowns.

[Maze diagrams: a different region chosen from the users matrix]

We could end up in a formation where it’s impossible to create a connected loop touching all faces of the trisociation cube. We’re blocked.

[Diagram: incompatible matrix regions, where no connecting loop is possible]

You can think of a pivot as swapping in one matrix region for another. Flickr swapped in photo sharing in place of gaming. Groupon swapped in shopping for political movements. With many of the same pieces, they moved from blocked to trisociated.

But even when you have compatible matrices fused together in your discovery process, it’s not guaranteed that you’ll find the path to synthesizing them. Freud took the right abductive leap to connect cocaine with medicine, but he missed the use case as a topical anesthetic.

As Koestler conveys in The Act of Creation, the person who originally juxtaposes two matrices is not always the one who makes the breakthrough connection. Just as Karl Koller, not Freud, is credited with the discovery of cocaine as a local anesthetic, Darwin wasn’t first to bisociate biology and the struggle for existence. But Darwin was the one who most rigorously established the connection and demonstrated its validity.

This last diagram shows a combination of matrices that can be connected with a loop, but the traveler has yet to discover the path. The explorer, with their own biases and tunnel vision, can’t know for sure that there is a solution. Tragically, they may give up, or pivot, before finding it.

[Diagram: compatible matrices whose connecting path has not yet been found]

 

I’ve heard my startup friends say they can’t watch HBO’s Silicon Valley because it’s “too true.” I feel the opposite. While it’s disconcerting to realize that startup people (like me) deserve to be mocked for certain behaviors, I love the show, in part, because it helps me understand the environment I’m immersed in. This sequence, in particular, has stuck with me.

 

I hadn’t put much thought into why I was so amused until I started reading Arthur Koestler’s 1964 book, The Act of Creation. Koestler argues that comedy involves “bisociation,” the connecting of two seemingly incompatible frames of reference. Here’s how he illustrates the type of humor that evokes sudden laughter:

[Diagram: Koestler’s illustration of bisociation in humor]

The underlying pattern, Koestler writes, “… is the perceiving of a situation or idea, L, in two self-consistent but habitually incompatible frames of reference, M1 and M2. The event L, in which the two intersect, is made to vibrate simultaneously on two different wavelengths, as it were. While this unusual situation lasts, L is not merely linked to one associative context, but bisociated with two.”

In the case of the Silicon Valley sequence I posted above, the humor is in the ridiculousness of bisociating obscure technical innovations with notions of improving the world; e.g., “Making the world a better place… through paxos algorithms for consensus protocols.” What it means to “make the world a better place” is subjective, but the implication of world-level change is that it’s large and foundational. Many startup pitches, in contrast, sound trivial and niche to outsiders. Silicon Valley‘s mockery concisely illuminates an element of grandiose bullshit imbuing many startups: a poorly constructed facade of higher purpose.

Koestler writes:

The creative act of the humorist consisted in bringing about a momentary fusion between two habitually incompatible matrices. Scientific discovery … can be described in very similar terms–as the permanent fusion of matrices of thought previously believed to be incompatible.

In short: “Comic discovery is a paradox stated–scientific discovery is paradox resolved…”

Many of the same paradoxes that Silicon Valley exposes through satire are the ones I’ve been wrestling with on this blog and in my career as a product manager. With my latest series of writing, starting with The Fundamental Tension in Product, I provide heuristics for staying committed to a world-changing vision while navigating the on-the-ground reality of real work and market signal. While the presenters in the TechCrunch Disrupt parody fall flat in navigating the tension, some companies are able to pull it off. Amazon has arguably “changed the world” through commoditizing cloud-based computing. “Cloud-based computing” might sound esoteric on the surface to a layman, but it’s had world-level impact by making the creation of websites significantly cheaper and easier.

The vision of many great tech companies can be expressed in the laughter-inducing syntax of the Silicon Valley pitches: “We’re going to make the world a better place, by [insert cryptic-sounding technical advancement].” Tesla’s vision has a different flavor with a similar structure. Elon Musk says he will save the world by creating cars for rich people. His master plan actually might hold up, but it could sound laughably delusional on the surface.

The paradox of a tech startup’s world-changing ambitions is just one of the interesting tensions illuminated by Silicon Valley. Another such tension is the one that exists between sales and engineering, illustrated by Pied Piper’s pivot to create The Box:

 

Starting with The Product Management Triangle, I’ve written in depth about the tensions that emerge around product between business, technology, and customers. These tensions can manifest as a clash between engineering and sales. Engineering generally wants to work on what’s unique, innovative, and artful. Sales wants the engineering team to build something that can be sold. Season 2 of Silicon Valley comically depicts an extreme resolution of the tension: simply letting the sales team dictate entirely what the engineering team builds. The result, of course, isn’t viable. The engineering team rebels. After the sales team gets fired, the pendulum swings to the other extreme. Pied Piper ships a product that was never tested on users other than engineers. The product was technologically brilliant but inaccessible to the general public.

I couldn’t think of a better stage-setting to explain the need for product management. It is precisely the job of the product manager to manage tensions like the one that led to The Box. The product manager must synthesize (or bisociate) seemingly conflicting inputs into a narrative that resonates across multiple planes of thought.

As much as I would like to escape the mockery of Silicon Valley, clearly I cannot. I’m hit dead on by Pied Piper’s temporary CEO with his “Conjoined Triangles of Success.”

[Image: the “Conjoined Triangles of Success” poster from Silicon Valley]

 

In 2005, I took on my first product management job, developing data management tools for CNET. Wisely, the company put me through agile training. While the agile philosophy resonated with me, I quickly fell into a waterfall trap in my first large project. The objective was to create a tool that would streamline the workflow between the editorial and data teams. Given complexities in how the two teams needed to work together, we designed an elaborate user interface to handle every last corner case. Stakeholders were happy with the design, so the developers started building it. Even though we worked in two-week sprints, the engineering effort spiraled out of control. The will to invest in the project dried up before we were finished implementing the planned design. We ultimately shipped a tool that cosmetically looked like our original design, but only portions of the user interface were functional.

While the tool was an embarrassment on many levels, it was not a complete failure. A feature that we built in the very beginning — the ability for the editorial team to publish a product review before a catalog record was processed by the data team — was a big efficiency gain over the previous state. If we had just shipped that one feature first and iterated on the design from there, we would have saved the company tons of money and delivered better results. Some of the corner cases we prepared for, in practice, never happened. The interface we handed over to the engineering team was insanely over-designed.

Fortunately, I only made that particular mistake once in my career. I started looking at all design assumptions with suspicion. After reading The Lean Startup, my conversion to skeptical product development was complete. I felt that every strategy needed to be rigorously tested and validated with users from the outset. Any small investment in an invalidated direction, in my mind, was fat that should be cut out of our operation. Lean was my religion.

When Steve Jobs died in 2011, I read Walter Isaacson’s biography. Jobs’ aggressive pushing of vision, against resistance from co-workers and the marketplace, seemed key to Apple’s unprecedented impact. To a lean extremist, the Steve Jobs phenomenon is pretty confusing. My bias was to be adaptive to marketplace signal, not to ignore it. As someone with my own dreams of changing the world, I admired Steve Jobs. Yet, I espoused a doctrine that made his success unintelligible. In retrospect, I’m amazed by how long I lived with this contradiction.

The first time I became consciously aware that lean product thinking had limits was from reading Product-Driven versus Customer-Driven by Venkatesh Rao in 2014. Rao pointed out that, with lean startup methodology, attempts to iterate at minimal cost towards market validation can take you away from your original vision, leading to what he describes as “visioning-process debt.” Rao writes: “While there is a vast repository of programmer knowledge about refactoring techniques, there is no comparable metis around pivoting and paying off visioning-process debt.”

Rao’s words about the limitations of lean startup felt profound to me, but I didn’t know what was tangibly on the other side. If lean startup wasn’t an optimal way to think about product, what was the optimal way?

Additionally, Rao convincingly claimed that “there’s an indispensable asshole at the top for product-driven” companies. Since iterating based on customer feedback is insufficient to achieve large impact, an asshole is necessary to get a whole company to go along with a vision that cannot be market tested along the way. Being a visionary product thinker is insufficient — you need to intimidate a group into following you, or so the argument goes.

This was troubling to me since, with a high autonomic response rate, I’m biologically wired to not be an asshole. Was I facing a non-asshole ceiling in my ambitions as a product leader?

Time will tell whether I’m facing a non-asshole ceiling, but I’ve at least made progress grasping what’s on the other side of lean thinking. Rao’s recent piece offers a nice way of framing it: “fat thinking.” Of course! It’s only natural that fat thinking would follow lean thinking, similar to how philosophical movements zig zag between opposites, like structuralism leading to deconstruction.

Fat thinking, however, does not replace lean thinking. The optimal product crafting process must balance both modes, recognizing the correct context for each one.

Here’s how Rao characterizes fat thinking:

When you get away from lean regimes, you’re running fat: you’ve deployed more capital than is necessary for what you’re doing, and you’re deriving a return from it at a lower rate than the maximum possible.

So why would you ever deploy more capital than is necessary? Consider the decision to build a platform. A lean extremist would say that you shouldn’t build a platform until you’re scaling an app with proven demand. Otherwise, you risk creating a platform that no one will use. Why not first find out if people give a shit before doing the heavy lifting?

The fat thinker’s answer is that building a platform can enhance your ability to discover a killer app. If you start by making only a platformless prototype app and it fails user validation, only minor iterations come easy. There’s no way to quickly manifest the essence of the original vision in widely varying forms.

Platform fat can make you leaner in attacking specific opportunities. I saw this at my previous company. We had a platform for rating the sustainability of products. While we started with a consumer focus, we later discovered demand for an enterprise app. The platform made validating this direction cheap. If the original consumer app was only a prototype, we would have had to almost start from scratch testing the enterprise app. Instead, it only required bolting on a new UI. As it turned out, we ultimately had to rebuild the platform, but we did so with a paying partner.

Building a platform before you’ve validated a killer app requires faith in a vision. In his 1896 paper The Will to Believe, philosopher William James explains why this type of untested belief is not in conflict with a scientific sensibility. James argues that there are cases, like religion, where you need to believe a hypothesis to find out if it’s true. Building a platform-first startup is such a case. You’re not going to build an unvalidated platform if you’re skeptical of its value.

This gets us to another difference between lean and fat thinking. In lean startup methodology, we’re predominantly skeptical of all hypotheses. A direction must be proven before it’s adopted. In fat thinking, we cannot be skeptical. We must believe that we are on the right course. Skepticism will cause failure. (My last post explains this point in more detail.)

In one sense, the distinction between lean and fat thinking maps to Karl Popper’s separation of science and pseudoscience. While it’s been debated by others, Popper argues that the defining characteristic of science is that its hypotheses must be falsifiable. Falsifiable hypotheses are key to lean startup methodology. If an experiment cannot result in a failure, what can you learn from running it? A pseudoscience, in contrast, is based on confirmation. Astrology has traction because so many happenings can be interpreted to confirm its claims. The visions that motivate fat thinking share pseudoscience’s potential for wide interpretation, but towards positive ends. A powerful product vision attracts believers. Google’s mission “to organize the world’s information” or Facebook’s mission “to make the world more open and connected” bring people under the tent in an astrology-like manner.

However, the believers attracted by companies like Apple, Facebook, and Google must also be scientists. Lean and fat thinking must be combined under the same roof. Elon Musk’s master plan for Tesla is a perfect illustration. Musk believes that his company can help prevent the collapse of civilization by driving sustainable changes through the economy (requires fat thinking). Doing so requires using limited resources to win in specific markets that exist now, starting in the high-end car market  (lean thinking).

Here’s my summary heuristic for how to combine lean and fat thinking:

  1. Use lean thinking to adapt your creations to the world as it is now.
  2. Use fat thinking to conceptualize how you will change the world or anticipate how the world will change.

Lean thinking is how you win markets, tap into behavioral patterns, adapt to the economy, and create a killer app with graphs that go up and to the right. Fat thinking is how you create markets, change human behavior, transform the economy, and build a platform that births a multitude of killer apps.

While much systematic thought has been published on lean manufacturing and thinking, there’s less out there on fat thinking. Today, fat thinking is primarily visible from (A) charismatic “assholes” who easily attract followers to their vision or (B) creationist-style thinkers who don’t believe that science has a role in product design.

To make fat thinking a fruitful enterprise for scientifically-minded non-assholes, we need to create new systems. Rao’s “leak before failure” notion is the beginning of one such system. Wardley value chain maps are another example. Both ideas provide justifications for adding fat to your creations in anticipation of world change.

To conclude, I’ve reposted the diagram from my post, The Fundamental Tension in Product.  The inner loops describe how to stay oriented in your current environment. The goal here is to stay lean. Quick cycles translate to market advantage. Your fat should come from the outer loops, your vision for how the world should or will be.

[Diagram: the inner and outer loops from The Fundamental Tension in Product]

 

Sometimes we create as a form of play, for fun, or to satisfy unguided curiosity. Other times we create with a purpose, an intent to move the world in a direction we want it to go.

As I explain in my post The Fundamental Tension in Product, there is a paradoxical nature to creating for change. Changing the world seems to simultaneously require bending the world and bending to it. Products, artworks, and political movements achieve minimal impact, regardless of their genius, when they are too far away from public consciousness, economic realities, or technological feasibility.

To fit our work with current reality, we are compelled to adopt a scientific sensibility; a mindset of running experiments and dispassionately discarding ideas shown to be false. We take on the perspective of the skeptic, looking at our own intuitions with suspicion. We require seeing evidence before believing hypotheses to be true.

Within your organization or movement, it may be the case that you are a minority voice advocating for a more scientific approach. Frustrated by watching investment in seemingly arbitrary directions, perhaps you’re begging your peers to read The Lean Startup. But for now, let’s assume the value of testing ideas is a given.

As the pendulum of our creative process swings towards science, a new risk emerges. With the scientific approach, we methodically weed out bad assumptions, continuously conforming our creations to living behavioral patterns. However, as our creations transform, after countless iterations and discoveries, we risk losing touch with the original vision we sought to make real at the outset — we cross over from changing the game to mastering its rules. Venkatesh Rao characterizes this phenomenon as visioning-process debt.

A scientific approach, while optimal in many scenarios, is insufficient for changing the world. The most impactful creations seem to come, in part, due to an unwavering vision that willfully ignores the apparent workings of society.

It might appear on the surface that holding an “unwavering vision” is incompatible with the scientific sensibility, but this is not necessarily the case. If you were to stay committed to a hypothesis that was proven false, there is surely a problem. But this does not imply that a hypothesis must be proven true before you invest in pursuing it.

In his 1896 paper, The Will to Believe, pragmatist philosopher William James argues for the necessity of believing some hypotheses before you have proof. His overall agenda is to argue for the compatibility of religion and the scientific sensibility. Religion is the archetypal untestable hypothesis. Belief in an afterlife cannot be falsified until you’re dead.

But there are less extreme cases that require believing hypotheses before they’ve been tested. Consider what it’s like to work with a team. You must believe that other members of your team will be genuinely committed to the same cause. If you require evidence of commitment before you contribute your part, your skepticism will cause the hypothesis to fail. Your team members, seeing your disengagement, will also disengage.

James writes on religion and other cases where a hypothesis must be believed prior to testing:

We cannot escape the issue by remaining sceptical and waiting for more light, because, although we do avoid error in that way if religion be untrue, we lose the good, if it be true, just as certainly as if we positively chose to disbelieve… Scepticism, then, is not avoidance of option; it is option of a certain particular kind of risk. Better risk loss of truth than chance of error,-that is your faith-vetoer’s exact position. He is actively playing his stake as much as the believer is; he is backing the field against the religious hypothesis, just as the believer is backing the religious hypothesis against the field. To preach scepticism to us as a duty until ‘sufficient evidence’ for religion be found, is tantamount therefore to telling us, when in presence of the religious hypothesis, that to yield to our fear of its being error is wiser and better than to yield to our hope that it may be true.

While for me, as James puts it, religion is a dead hypothesis, I think his sentiment sheds light on how great creators persevere against the social grain. On the one hand, science is underused in how people strive to spark change. Science should creep into more and more creative domains. On the other hand, experiencing the power of the scientific sensibility can bias us against less testable directions. I designed the below maze to illustrate this bias.

[Photo: a hand-drawn maze with a fork leading to two contrasting paths]

After entering the maze, you quickly reach a fork in the road. If your goal is to efficiently solve the maze, you will likely look ahead in both directions before committing to a path.

If your eyes follow the path on the right, you immediately see avoidable dead-ends. This path is intended to evoke the scientific sensibility, a process centered around avoiding falsehoods. The hope is that methodically avoiding errors will lead to the discovery of a meaningful new truth, but this is not guaranteed.

The left path, in contrast, winds around with no foreseeable dead-ends. But it very well could ultimately lead to a dead-end. Finding out will take time. Heading this direction requires a “leap of faith.” You will get no feedback, positive or negative, until you reach victory or failure. It’s intended to be a metaphor for staying committed to an unwavering vision, regardless of evidence, or lack thereof.

So which road do you feel inclined to pick? Either one comes, as James argues, with potential upside and risk. However, as the scientific sensibility increasingly takes over our mindset, we risk becoming biased to take the path on the right, overlooking the opportunity presented on the left.