
Product Sense & Intuition

Developing the taste and judgment that separates great PMs from good ones.

I once watched a product manager spend six weeks building a feature that nobody asked for, nobody needed, and nobody used. The feature was technically impressive. The engineering was clean. The spec was thorough. And on launch day, usage flatlined at exactly zero.

That PM wasn’t bad at execution. They were bad at something harder to name: the ability to look at a problem and know which solutions users would actually care about. Call it product sense, product intuition, product taste. Whatever label you prefer, it’s the skill that separates the PMs who ship features from the PMs who ship outcomes.

And it’s not magic. It’s not innate talent. It’s a trainable cognitive skill with identifiable components, and I’m going to break down exactly what those components are and how to build them.

The thing nobody tells you about great PMs

Here’s a question that’s been bugging me for years. Why do some PMs consistently ship products that users love, while other PMs with equivalent intelligence, training, and resources consistently ship products that… exist?

I’ve watched this pattern across four companies and probably sixty product managers. The gap isn’t intelligence. It’s not work ethic. It’s not even domain expertise, though that helps. The gap is something more fundamental: the ability to make correct product decisions under ambiguity.

Shreyas Doshi, who led product at Stripe, Yahoo, and Twitter, defines product sense as “the ability to usually make correct product decisions even when there is significant ambiguity.” That word usually is doing a lot of work. Nobody gets it right every time. But the great PMs get it right often enough that, over a career, their hit rate is noticeably higher than everyone else’s.

The obvious follow-up: what are they actually doing differently?

Pattern recognition, not pattern matching

The first component of product sense is pattern recognition. Not pattern matching. The distinction matters.

Pattern matching is what happens when you read a case study about how Spotify built Discover Weekly and then try to replicate that exact approach for your B2B SaaS dashboard. You’ve matched the surface pattern (personalization drives engagement) without recognizing the deeper structure (Spotify’s users have a discovery need that maps to a specific consumption behavior that doesn’t exist in your product’s context).

Pattern recognition is what happens when you’ve seen enough products succeed and fail that you can identify the structural features that predict outcomes. You start noticing things like:

| Pattern | What it looks like | Why it matters |
| --- | --- | --- |
| Activation energy | How much effort users need to reach the first moment of value | Products that require 12 steps before the user gets anything useful have predictably high drop-off |
| Frequency mismatch | Building for daily use when the real use case is monthly | Instagram works because sharing photos is a daily behavior. A mortgage calculator doesn't need daily engagement features. |
| Solution shape | Whether the solution matches the shape of the problem | Slack solved the "context switching between email and chat" problem by making a chat product. It didn't solve it by building better email. |
| Emotional endpoint | What users feel after using the product | Duolingo users feel a sense of accomplishment after 5 minutes. That feeling, not the gamification mechanics, is the product. |

The key insight here is that pattern recognition requires volume. You have to have seen enough products, enough launches, enough failures, enough pivots to build an internal database of structural patterns. There’s no shortcut for this. Reading case studies helps. But watching products in the wild, using hundreds of products, noticing what works and asking why, noticing what doesn’t and asking why not: that’s the real training data.

Marty Cagan at Silicon Valley Product Group has been making this point for years. He calls it “product knowledge” and argues it’s built through direct exposure, not through frameworks or processes. You can’t learn product sense from a book any more than you can learn to cook from reading recipes. At some point, you have to stand at the stove.

The user psychology layer

The second component is less visible and harder to train: understanding user psychology at a level deeper than personas and journey maps.

Most PMs understand users at the behavioral level. They know what users do. They track clicks, page views, time on page, conversion rates. This is necessary but insufficient.

The PMs with strong product sense understand users at the motivational level. They know why users do what they do. And more importantly, they know why users don’t do what the PM expected them to.

This is where Clayton Christensen’s Jobs-to-be-Done framework becomes genuinely useful, as opposed to just being a thing people name-drop in product reviews. The framework’s core insight is simple: people don’t buy products. They hire products to make progress in their lives. The milkshake isn’t competing with other milkshakes. It’s competing with bananas, bagels, and boredom on a morning commute.

But knowing the framework isn’t the same as being able to apply it. Application requires empathy that goes beyond “I interviewed 12 users and here are the themes.” It requires the ability to sit with a user’s frustration, understand their context, feel the friction they feel, and then translate that understanding into product decisions.

I’ll give you a concrete example. When Duolingo’s team noticed that their completion rates were terrible for long-form lessons, the obvious product decision was “make the lessons better.” More engaging content, better pedagogy, clearer explanations. A mediocre PM would have gone down that path.

What Duolingo actually did was recognize that the problem wasn’t lesson quality. The problem was that their users were busy humans with five-minute windows of attention on a commute or in a waiting room. The job-to-be-done wasn’t “learn Spanish.” It was “feel like I’m making progress on learning Spanish in the tiny gaps in my day.” That reframe led to their micro-lesson format, which led to 500 million downloads.

The PM who sees that distinction has strong user psychology instincts. The PM who jumps to “better lessons” doesn’t.

Taste is real, and it’s not what you think

This is where the conversation gets uncomfortable, because the tech industry has a weird relationship with the concept of taste.

On one hand, everyone admires Steve Jobs for his taste. The man looked at a prototype iPhone nine months before launch and told his team to scrap the case design because it “competed with the display instead of getting out of the way.” That’s taste. He insisted on a single button for navigation when every other phone had a physical keyboard. That’s taste. He demanded they manufacture the screen with scratch-resistant glass that didn’t exist yet. That’s taste (and some stubbornness, but the line between taste and stubbornness is thinner than people admit).

On the other hand, nobody wants to say that taste matters, because it sounds elitist and undemocratic and unmeasurable. It sounds like you’re saying some people just have it and others don’t. And in a discipline that’s been fighting for decades to be taken seriously as a rigorous, data-driven function, admitting that taste plays a role feels like going backward.

But taste is real. And it’s not what most people think it is.

Taste isn’t “I like blue buttons.” Taste is the accumulated judgment that comes from deeply understanding a domain, its users, its constraints, and its possibilities. Taste is the ability to look at twenty possible solutions and feel which three are worth exploring before you’ve done the analysis. Taste is the compression algorithm your brain runs on thousands of prior observations to generate a rapid first filter.

Here’s what makes taste useful in product management specifically:

| Without taste | With taste |
| --- | --- |
| Every decision requires full analysis | Many decisions can be pre-filtered based on accumulated judgment |
| Feature specs look technically correct but feel lifeless | Feature specs reflect an understanding of what the product should feel like |
| PRDs specify behavior but not emotional outcome | PRDs specify how the user should feel at each step |
| Reviews focus on "does it work" | Reviews focus on "does it feel right" |
| Ship rate is high, impact rate is low | Ship rate might be lower, but impact rate is higher |

The uncomfortable truth: taste compounds. PMs with taste make better micro-decisions (this label should say X, not Y; this flow should skip this step; this error state should feel helpful, not punitive), and those micro-decisions accumulate into products that feel coherent and considered, even though no single decision was transformative on its own.

How to actually develop product sense

So we’ve identified the components: pattern recognition, user psychology, and taste. The question everyone asks is: how do you train them? And the honest answer is that training product sense is more like training a muscle than learning a formula. It requires deliberate practice over time. But there are specific exercises that work.

Exercise 1: Product teardowns (but done right)

Most people do product teardowns wrong. They open an app, poke around for ten minutes, and say “the onboarding is confusing” or “the design is clean.” That’s surface-level observation. It doesn’t build product sense.

A useful teardown answers specific questions:

1. What is this product's core job-to-be-done?
2. What is the first moment of value, and how long does it take to reach?
3. What is the retention mechanism? Why would someone come back tomorrow?
4. What trade-offs did the team make, and what were they optimizing for?
5. Where does the product feel "off"? What micro-decisions feel wrong?
6. What would I change, and what would the second-order effects be?

That last question is critical. Second-order thinking is where product sense lives. If you change the onboarding to be shorter, what do you lose? If you add a feature, what existing behavior does it interfere with? If you remove a step, what confusion does that create downstream?

I try to do one of these per week. Not a full written report. Just a structured mental walk-through while I’m using something new. After a year, you’ve got 50+ products in your mental database. After five years, you’ve got 250+. That volume is what builds the pattern recognition.
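If you want that weekly habit to accumulate into something searchable rather than purely mental, here is a minimal sketch of a structured teardown log that mirrors the six questions above. All names and the example entry are hypothetical, and the structure is just one way to do it:

```python
from dataclasses import dataclass, field

@dataclass
class Teardown:
    """One structured product teardown, one field per question."""
    product: str
    core_job: str               # 1. core job-to-be-done
    first_value_minutes: float  # 2. time to the first moment of value
    retention_mechanism: str    # 3. why someone would come back tomorrow
    tradeoffs: str              # 4. what the team was optimizing for
    feels_off: list[str] = field(default_factory=list)  # 5. micro-decisions that feel wrong
    changes: str = ""           # 6. what I'd change, with second-order effects

log: list[Teardown] = []
log.append(Teardown(
    product="ExampleNotesApp",
    core_job="capture a thought before it evaporates",
    first_value_minutes=0.5,
    retention_mechanism="notes accrue value; search improves with volume",
    tradeoffs="speed of capture over organization",
    feels_off=["sync-conflict dialog blames the user"],
    changes="drop the folder prompt at first launch; risk: power users lose structure",
))

# After a year of weekly entries, simple queries across your "database" become possible:
fast_to_value = [t.product for t in log if t.first_value_minutes <= 1]
```

The point isn't the tooling; a spreadsheet works just as well. The point is that the six questions get answered in full, every time, in a form you can revisit.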

Exercise 2: Decision journals

This is the exercise that most directly builds the taste component. Every time you make a meaningful product decision, write down three things:

  1. What you decided
  2. Why you decided it (the real reason, not the rationalized reason)
  3. What you expect to happen

Then, 30-60 days later, go back and compare your prediction to reality. Were you right? If not, why not? What did you miss? What assumption was wrong?

Over time, this calibrates your judgment. You start to notice your systematic biases. Maybe you consistently overestimate how much users care about visual polish. Maybe you consistently underestimate the friction of adding one more step to a flow. Maybe you’re great at predicting consumer behavior but terrible at predicting enterprise buyer behavior.

This self-knowledge is product sense. It’s knowing where your instincts are reliable and where they aren’t.

Exercise 3: Reverse engineering competitor decisions

Pick a product you admire. Pick a specific feature or design choice. Then reverse-engineer the decision process that produced it.

Don’t just ask “why did they do this?” Ask “what other options did they consider and reject?” Try to reconstruct the decision tree. What were the constraints? What were the trade-offs? What data might they have had? What hypotheses were they testing?

When I reverse-engineer a product decision, I try to come up with at least three alternative approaches the team likely considered. Then I reason about why each was rejected. This exercise forces you to think in trade-off space, which is where product sense actually lives.

For example, take Apple’s decision to remove the headphone jack from the iPhone 7. The surface-level analysis is “courage” (their word) or “greed” (the internet’s word, since it drove AirPods sales). The deeper analysis involves understanding the trade-off space:

  • More internal volume for the battery and the Taptic Engine
  • Improved water resistance with one fewer port
  • Strategic push toward a wireless accessory ecosystem
  • Cost to user experience (dongle hell, can’t charge and listen simultaneously)
  • Timing relative to Bluetooth audio quality improvements

A PM with product sense doesn’t just evaluate the decision. They understand the shape of the decision space and can identify which factors likely dominated.

Exercise 4: Exposure therapy

This one is simple but powerful. Use products outside your normal categories. If you build B2B SaaS, spend time with consumer apps. If you build mobile apps, use desktop productivity tools. If you build for developers, use products designed for non-technical users.

Cross-domain exposure builds a different kind of pattern recognition. You start seeing structural similarities between products that serve completely different users. The way Figma handles collaborative editing has lessons for collaborative document tools. The way TikTok’s algorithm surfaces content has lessons for B2B content recommendation. The way Stripe’s documentation reduces developer friction has lessons for any product with a complex setup process.

The most creative product decisions I’ve seen come from PMs who imported patterns from adjacent or distant domains. Not copying features, but recognizing structural solutions.

The anti-patterns: what kills product sense

It’s worth talking about what suppresses product sense, because some organizational environments actively destroy it.

Anti-pattern 1: The feature factory

When a product team is measured by output (features shipped) rather than outcomes (problems solved), product sense atrophies. PMs stop asking “should we build this?” and start asking “how fast can we ship this?” The judgment muscle goes unused because judgment isn’t valued. Only velocity is.

John Cutler popularized this term, and it resonated precisely because so many PMs recognized their own teams in it. In a feature factory, the PM’s job is to write specs and manage timelines. The actual product decisions (what to build and why) are made by executives or stakeholders, and the PM is just a translation layer. This is where good PMs go to have their product sense slowly die.

Anti-pattern 2: Data-only decision making

I’m going to be careful here because data is important and I’ll write more about this in a future post. But there’s a specific failure mode where teams refuse to make any product decision without “data to support it.” This sounds rigorous. In practice, it paralyzes the team on decisions where data doesn’t exist yet.

The problem with demanding data for every decision is that the most interesting product opportunities exist in spaces where there is no data. There’s no data on whether users want a product that doesn’t exist. There’s no data on whether a novel interaction pattern will feel intuitive. There’s no data on whether a bold redesign will retain existing users while attracting new ones.

Product sense is what you use in the absence of data. If your organization doesn’t allow decisions without data, you’re selecting for incremental improvements and against breakthrough products.

Anti-pattern 3: Consensus-driven product decisions

When every product decision must achieve consensus among multiple stakeholders, the resulting product reflects nobody’s vision. It reflects the average of everyone’s opinions, which is almost always mediocre.

The best products I’ve used feel opinionated. Basecamp is opinionated about simplicity. Linear is opinionated about speed. Superhuman is opinionated about keyboard shortcuts and email workflow. Notion is opinionated about blocks as a universal primitive.

These products feel coherent because they reflect a clear point of view. Consensus produces products that feel like compromise. And compromise is the opposite of taste.

This doesn’t mean PMs should be dictators. It means that somewhere in the decision chain, someone needs to have the authority and the product sense to say “this is the direction, here’s why, and I’m accountable for the outcome.”

What separates great PMs from good ones

Let me synthesize all of this into a specific claim: the difference between great PMs and good PMs is not intelligence, work ethic, or even domain expertise. It’s calibrated judgment under ambiguity.

Good PMs can execute a clear roadmap. They can write specs, manage stakeholders, and ship features on time. These are important skills. But they’re table stakes.

Great PMs can look at an ambiguous situation (unclear user needs, conflicting data, multiple possible approaches, uncertain market dynamics) and make a decision that turns out to be right more often than it should be. They can do this because they’ve built three things:

  1. A deep library of patterns from years of active product observation
  2. Genuine user empathy that goes beyond behavioral analytics to motivational understanding
  3. Calibrated taste that’s been tested against reality and refined through feedback loops

Notice that all three of these are experiential. You can’t shortcut them with frameworks or credentials. An MBA doesn’t give you product sense. A CIRCLES framework doesn’t give you product sense. They give you structure, which is useful, but structure without judgment produces the product equivalent of technically-correct-but-soulless output.

This is why junior PMs with strong product sense often outperform senior PMs without it. The junior PM who has spent years obsessively using, analyzing, and thinking about products brings a richer mental database to every decision than the senior PM who’s been managing roadmaps and stakeholder meetings for a decade but hasn’t actively engaged with products outside their own domain.

The uncomfortable truth about hiring for product sense

Since I’m being direct: most PM interviews don’t test for product sense. They test for structured thinking (which is necessary but insufficient), communication skills (same), and the ability to generate a plausible-sounding analysis of a product question in real time (which is a performance skill, not a product skill).

The PMs I’ve seen with the strongest product sense often struggle in traditional PM interviews because their answers are specific rather than structured. They say things like “that’s the wrong feature because the retention problem is actually an activation problem, here’s why” instead of walking through a MECE framework. They have strong opinions backed by deep observation, which can read as “not structured enough” to an interviewer who’s evaluating against a rubric.

Meta and Google use dedicated “product sense” interview rounds, which is a step in the right direction. But even these tend to reward the ability to think through a product question on the spot, which is a different skill from the ability to make consistently good product decisions over months and years.

If I were hiring a PM and could only evaluate one thing, I’d ask them to walk me through a product decision they got wrong, what they learned from it, and how it changed their judgment. The quality of that reflection tells you more about their product sense than any hypothetical product question ever could.

Building the muscle: a 90-day program

If you’re a PM who wants to deliberately build product sense, here’s what I’d suggest for the next 90 days:

Weeks 1-4: Build the observation habit

  • Do one structured product teardown per week (using the six questions above)
  • Start a decision journal. Log every meaningful product decision you make.
  • Install and actively use 3 products completely outside your domain

Weeks 5-8: Build the empathy muscle

  • Watch 5 usability sessions (your own product or others via UserTesting clips)
  • Do 3 Jobs-to-be-Done interviews with real users of your product
  • Read “The Mom Test” by Rob Fitzpatrick (the best book on asking users questions without getting lied to)

Weeks 9-12: Build the judgment loop

  • Review your decision journal entries from weeks 1-4. How did your predictions compare to reality?
  • Do 3 reverse-engineering exercises on products you admire
  • Write a “product beliefs” document: a list of strong opinions you hold about what makes products good. Revisit it quarterly.

This program won’t transform you in 90 days. But it will start the flywheel. Product sense compounds. Every observation, every decision logged, every prediction checked against reality adds to your mental database. And that database is what you’re drawing on every time you sit in a product review and feel something is off.

Because that’s ultimately what product sense is. It’s not a mystical sixth sense. It’s not innate talent reserved for a chosen few. It’s the result of thousands of hours of active, deliberate engagement with the problem of building things that people want to use. The PMs who have it aren’t smarter. They’ve just been paying closer attention, for longer, and they’ve been honest with themselves about what they got wrong along the way.

The good news is that paying attention is free. And you can start today.
